The fight against conspiracy theories and other fake news about the coronavirus crisis is receiving more help from social media and other tech platforms, as a number of thought leaders have argued.* In my opinion, however, a more important factor has been more successful outreach by governmental, industry, and academic researchers. Too often, the research community has been complacent about getting the results of its research to opinion leaders and the broader public. Years ago, I argued that too many scientists held a ‘trickle-down’ theory of information dissemination:** once they publish their research, their job is done, and others will read and disseminate their findings.
Even today, too many researchers and scientists are complacent about outreach. They are focused on publication and communication with their peers and see outreach as someone else’s job. The coronavirus crisis has demonstrated that governments and mainstream, leading researchers can get their messages out if they work hard to do so. In the UK, the Prime Minister’s TV address and multiple press conferences have been very useful – the address reaching 27 million viewers in the UK and becoming one of the ‘most watched TV programmes ever’, according to The Guardian. In addition, the government distributed a text message to people across the UK about its rules during the crisis. And leading scientists have been explaining their findings, research, and models to the public, with the support of broadcasters and social media.
If scientists and other researchers are complacent, they surrender the conversation to creative and motivated conspiracy theorists and fake news outlets. In the case of Covid-19, it seems that a major push by the community of scientific researchers, governmental experts, and politicians has shown that reputable sources can be heard over and amongst the crowd of rumors and less authoritative information. Rather than try to censor or suppress social media, we need to step up efforts by mainstream scientific communities to reach out to the public and political opinion leaders. No more complacency. It should not take a global pandemic to show that this can and should be done.
* Marietje Schaake (2020), ‘Now we know Big Tech can tackle the “infodemic” of fake news’, Financial Times, 25 March, p. 23.
** Dutton, W. (1994), ‘Trickle-Down Social Science: A Personal Perspective’, Social Sciences, 22, 2.
Times have changed. In the early years of my career as an academic, the poster session was a sort of second-class option for presenting at an academic conference. That is no longer the case. Newer generations of academics are trained and attuned to creating posters and infographics to explain and communicate their work. In many cases, the poster and poster session seem to be the preferred mode of presentation, compared to sitting on a panel or making a traditional presentation of an academic paper – which is often a set of slides that could just as well be incorporated into a poster.
Anecdotally, I have seen the rising prominence of poster sessions across a wide range of academic conferences I’ve attended over the years – in communication, political science, computer science, and communication policy, such as TPRC. For example, it is increasingly common for a conference time slot to be devoted to poster sessions that do not compete with other presentations. I have also seen a leap in the sophistication and visualization quality evident in poster sessions. More software, templates, training, and guidelines are being developed to refine posters in an increasingly competitive field.
Younger academics are more attuned to the creation of posters, and I am sure they will continue to develop them as they rise in the academic ranks – it is more of a cohort issue than a status issue in academia. But think of the added value of poster sessions to presenters and their audiences.
From the presenter’s perspective, rather than having one shot to stand in front of a large audience to formally present a paper, they have multiple opportunities to present the same material to smaller groups or even a single individual. All presentations help you refine your ideas and the logic of your argument, so multiple iterations are even more beneficial. And attentive presenters can tailor their presentation to the particular interests and questions of the specific audience they have at the moment. It is wonderful when a member of the audience introduces themselves to you after a panel, but you can introduce yourself to many more individuals and network more effectively in smaller sessions.
From the audience’s perspective, everyone has sat through an academic presentation that did not meet their expectations: they misunderstood the title, or came for another paper, and were polite enough to listen to the rest. In a poster session, by contrast, audiences stroll through rows of posters and can locate particular topics and presentations of genuine interest. Moreover, serendipity – finding interest in a topic you had not previously considered – is far more likely. Presenters and their audiences can spend a few or many minutes not only listening but discussing the topic together. It is truly an efficient as well as an effective presentational style.
Shame on me for not proposing a poster yet in my career. But I am not so blind that I cannot see that the poster has risen as a medium for academic communication and increasingly as a preferred rather than a second choice for leading academics. Universities and research institutes need to support students and faculty who choose this option.
Here is a nice example of a useful, infographic-packed poster, via Chris Bode’s Twitter:
It is common to debate the definition and correct implementation of the Chatham House Rule. My issue is with its overuse. It should be invoked in exceptional cases, rather than routinized as a norm for managing communication about meetings.
To be clear, the Chatham House Rule (singular) is: “When a meeting, or part thereof, is held under the Chatham House Rule, participants are free to use the information received, but neither the identity nor the affiliation of the speaker(s), nor that of any other participant, may be revealed.”*
One of the central rationales of this rule was to enable more transparency by freeing governmental and other officials to speak without attribution.** Clearly, there are cases in which individuals cannot speak publicly about an issue given their position. Think of the many cases in which news sources do not wish to be identified by journalists. Similar situations arise in meetings, and it is good that the Chatham House Rule exists for use on just such occasions to promote greater transparency.
However, it is arguable that the Chatham House Rule is often used in ways that do not promote transparency. For example, it is frequently misunderstood and used to prevent members of a meeting from conveying any information provided at the meeting. Clearly, the original rule left participants ‘free to use the information’, just without identifying the source. This expansion of the Rule runs counter to the aim of its establishment.
In addition, all too often the Rule is invoked not because the content of a meeting is particularly sensitive, but because it creates a sense of tradition and an aura of importance. It conveys the message that something important will be discussed at this meeting. The function here is more about marketing a meeting than creating a safe setting for revealing secret, confidential, or new information.
A related rationale is that it is just ‘the way we do things’ – the tradition. In such cases, there is likely no genuine need for reduced transparency, just blind adherence to tradition, with information inadvertently suppressed as a result.
In many ways, the times are making The Chatham House Rule more problematic.
First, history is pushing us toward more transparency, not less. The spirit of the Rule should lead us to apply it only when necessary to open communication, such as around a sensitive issue, not to routinely regulate discussion of what was said in a meeting.
Secondly, the authenticity of information that comes out of a meeting is often enhanced by knowing more about its source. If a new idea or piece of information is attributed to an individual, that individual can become a first source for authenticating what was said, and for follow-up questions.
Thirdly, technical advances are making it less and less realistic to keep the source of information confidential. Leaks, recordings, live blogging and more are making transparency the norm of nearly every meeting. That is, it is better to assume that any meeting is public than to assume any meeting is confidential.
Over a decade ago, I organized and chaired a meeting that included the UK’s Information Commissioner (the privacy commissioner, if you will), and it was conducted under the Chatham House Rule. At the break, I checked with my IT group about how the recording was going, as we were recording the meeting to prepare a discussion paper to follow. Lo and behold, the meeting was being Webcast! This made for a good laugh by the Commissioner and all when we reconvened, but it also reminded me that everyone should assume the default of a meeting in the digital world is that all is public rather than private.
Finally, there are better ways to handle information in today’s technical and political contexts. Personally, I usually record meetings that are about academic or applied matters, as opposed to meetings about personnel issues, for example. So if we convene a group to discuss a substantive issue, such as a digital policy issue like net neutrality, we let all participants know that presentations and discussions will be recorded. We do not promise that anything will be confidential, as it is not completely under our control, but we promise that our recording will be used primarily for writing up notes of the meeting, and that if anyone is quoted, they will be asked to approve the quote before it is distributed publicly.
Of course, when individuals request that something remains confidential, or confined to those present, then we do everything we can to ensure that confidentiality. (As with The Chatham House Rule, much relies on trust among the participants in a meeting.) But this restriction is the exception, rather than the rule. This process tends to ensure more accurate reports of meetings, enable us to quote individuals, who should get credit or attribution, and support transparency.
* The Chatham House Rule was established in 1927, Chatham House being the UK’s Royal Institute of International Affairs. The worries at that time were more often about encouraging government officials to participate in discussions of sensitive international concerns by assuring anonymity. Today there are still likely to be occasions when this rule could be useful in bringing people around the table, but that is likely to be the exception and not the rule in the era of the Internet, distributed electronic conferencing, and live Tweeting.
** As noted by Chatham House: “The Chatham House Rule originated at Chatham House with the aim of providing anonymity to speakers and to encourage openness and the sharing of information. It is now used throughout the world as an aid to free discussion.” https://www.chathamhouse.org/about/chatham-house-rule
Wonderful to see a chapter by me, Frank Hangler, and Ginette Law, entitled ‘Broadening Conceptions of Mobile and Its Social Dynamics’ in Chan, J. M., and Lee, F. L. F. (2017), Advancing Comparative Media and Communication Research (London: Routledge), pp. 142-170. It arrived at my office today.
The volume evolved out of an international conference to mark the 50th anniversary of the School of Journalism and Communication at the Chinese University of Hong Kong in 2015. But the paper’s origins date back to a project I did during my last months at Oxford in 2014, and early in my tenure at MSU, as the Principal Investigator, with Ginette and Frank, of a project called ‘The Social Shaping of Mobile Internet Developments and their Implications for Evolving Lifestyles’, supported by a contract from Huawei Technologies Co. Ltd to Oxford University Consulting. This led first to a working paper done jointly with colleagues from Oxford University and Huawei: Dutton, William H., Law, Ginette, Groselj, Darja, Hangler, Frank, Vidan, Gili, Cheng, Lin, Lu, Xiaobin, Zhi, Hui, Zhao, Qiyong, and Wang, Bin (2014), Mobile Communication Today and Tomorrow, 4 December, A Quello Policy Research Paper, Quello Center, Michigan State University. Available at SSRN: http://ssrn.com/abstract=2534236 or http://dx.doi.org/10.2139/ssrn.2534236
The project moved me to a far better understanding and appreciation of the significance of mobile, but also of its varied and evolving definitions. Before this paper, I was skeptical of academic work centered on mobile, as I considered it one area of Internet studies. However, by the end of the project, I became convinced that mobile communication is a useful and complex area for research, policy, and practice, complementary to Internet studies. In the working paper, we forecast the disappearance of the mobile phone device, which seemed far-fetched when we suggested it to Huawei but is now becoming a popular conception. So look forward to a future in which that awkward scene of people walking along staring at their mobiles will come to an end, in a good way.
This paper illustrates the often circuitous route of academic work from conception to publication, which is increasingly international and collaborative. So thanks to the editors, my co-authors, Oxford Consulting, and Huawei for your support and patience. Academic time is another world. But it was all worth doing and the wait.
On my last trip to China, I met with a former social science colleague at Tsinghua University, Professor JIN Jianbin, who received a new research grant to study public perspectives on science, such as around research on genetically modified crops. Our conversation about genetically modified organisms (GMOs) quickly touched on a variety of other issues, such as the public’s acceptance of research on climate change – a topic on which sizeable proportions of the public in China, the US, and other nations often dismiss, if not distrust, scientific opinion.
Of course, some level of public distrust of scientific authorities is not new. I recall some famous work by political scientists in the US who studied the politics of conspiracy theories around the fluoridation of water, which was prominent across American communities from the 1950s and – surprisingly – carries on to this day. So while it is not new, distrust of the political motivations behind scientific opinion is arguably growing.
Some indicators have suggested that diffuse public support for scientific institutions is not declining. However, there is some limited and more recent evidence that universities and academics are being perceived as more partisan. And anecdotally, science is increasingly dismissed as biased, with researchers claimed to be in the pockets of the sponsors of their research, as illustrated by controversies over pharmaceutical research.
Such assaults on the integrity of science have led universities and research institutions to place a higher priority on the prevention and detection of conflicts of interest arising in the conduct of research. Symptoms of this growing distrust also seem evident in the divisions over a rising number of issues, with GMOs, climate change, vaccinations, and evolution being among the more prominent. Perhaps the controversies surrounding science simply reflect the many issues that have broad public implications, such as for the digital economy or public health, whereas issues such as the moon landing were more removed from immediate public impact on the redistribution of resources.
The bad news is that these controversies are likely to slow progress, such as on efforts to reduce man-made climate change. In some cases these controversies are dangerous, such as in leading parents not to vaccinate their school-age children.
However, there might be some positive outcomes here, if not good news. One positive outcome of this developing problem might be that scientists will place a greater priority on better explaining their work to a wider public. Already, the study of science communication is a burgeoning field around the world, illustrated by new research being launched by my colleague JIN Jianbin, Professor of Journalism and Communication at Tsinghua University in Beijing. And an increasing number of research councils and foundations stress the importance of public outreach.
Of course, scientists explain their research findings and their implications as a matter of practice. Not to be forgotten or dismissed is perhaps the most effective albeit long-term form of science communication, which is teaching in colleges and universities. Yet there are questions about whether top scientists, whatever their field, are as closely involved in teaching as they could be. For example, my former university, the University of Southern California, placed a priority on putting top senior scholars into the entry level undergraduate courses, which I thought was brilliant, but which is exceptional.
But arguably, most communication about scientific issues remains peer-to-peer rather than public-facing. Peer-to-peer communication is conducted through journal publications and academic conferences and presentations. And when it is public-facing, it is often limited to top-down, or what I have called ‘trickle-down’, science: scientists expecting their publications to be read and interpreted by others, rather than by communicating directly themselves as the primary researchers.
However, and here I could be wrong, the worst possible development might be what I see as a trend toward scientific persuasion, often based on appeals to authority and scientific consensus, or on lobbying, such as through petitions, rather than on effective communication of research. Any scientist is quick to dismiss or place less credibility in appeals to authority. Why should the public be different? Where is the evidence? And once scientists move into the role of a lobbyist, petitioner, or activist, they diminish their credibility as scientists or researchers. Surely this kind of context collapse – when a scientist becomes political, or a doctor runs for political office – invites the public to view scientists and academics as partisan political actors rather than scientific actors, and to see them in ways that parallel other political actors and lobbyists.
How can scientists explain their work to a larger public? First, they need to recognize the need for and value of effectively communicating their work to a broader public. This aim is rising across academia, such as in research councils insisting that research include components on outreach, and in academic quality being judged increasingly by its impact. Unfortunately, this can sometimes drift into a tick-box exercise in budgeting for conferences and seminars involving business, industry, and government, while serious efforts to communicate with the general public interested in the topic need to be tackled directly. Academics need to guard against this tick-box mentality.
Another concern is that this need for public outreach might simply lead to a greater focus on media coverage – getting the press to pick up stories on a scientist’s research. There is nothing wrong with this; universities love such coverage, and it can be helpful. But news coverage is generally overly simplistic, too often misleading, and potentially adds to the problems confronting good scientific communication. Researchers need to hold journalists and the media more accountable, and address inaccuracies or overly simplified messages in the press, cable news shows, and mass media.
Another, possibly more effective and now more practical, approach is to communicate directly with the public. Join the conversation. Write reports on your research findings that are understandable to the educated public that might be seriously interested in your work now or in the future. You can reach opinion leaders in your areas of research, and thereby foster effective two-step flows of communication to the general public. Don’t worry about a mass audience, but aim to reach a targeted audience of those with a serious interest in your topic. When they search online for information about your topic, make sure that accessible presentations of your research will be found.
Unfortunately, too many academics are taught not to join the conversation, and to avoid blogging or writing for a general audience. Instead, they are taught to focus more than ever on reaching only the top peer-reviewed journals in their field and being read and cited by their peers. As noted above, this too often leads to a weak form of trickle-down science, which is not in the long-term interest of the scientific enterprise.
We should question this conventional wisdom in academia. Personally, I don’t believe there is a necessary risk to scientific publishing in also trying to communicate with a more general audience. That is what teachers do, and when researchers try to teach and communicate with their students, they can find problems with their arguments, and ways to improve how they convey their ideas.
So – scientists – offer up your best ideas to the public, treating them not as your peers but as smart and educated individuals who do not know about your work, or even why it is relevant. Some of my most meaningful experiences with communication about my research have come precisely when I – focused on Internet studies – sat next to a physicist or mathematician over a meal who asked me about my research, and vice versa. What am I working on? Why is it important? If we can do this over lunch or dinner, we can do it for a larger public online.
Perhaps this is more difficult than it sounds, but we need to accept the challenge. Arguably, the scientific challenge of the 21st century is effective communication to the larger public.
I’ve argued on this blog that the idea of enabling the press to ask questions from outside the White House Press Office, in fact, outside the Washington DC Beltway, was a good idea. Some anecdotal evidence is being reported that the strategy is working. USA Today reported that over 13 White House press briefings, Sean Spicer has taken questions ‘from 32 outside-the-Beltway outlets’. This is a great example of using the Internet to enable more distributed participation. The Washington press is obviously defensive when people complain about the ‘media bubble’ in the briefing room, but the potential for what was once called ‘pack journalism’ is real, and location matters. Geographically distributing contributions is symbolically and materially opening the briefings up to more diversity of viewpoints and issues.
Inevitably, more voices means more competition among the journalists in asking questions. But there are already too many in the room, and why it is fair to give more access to the outlets that can afford to station staff in Washington DC is not clear to me. That said, the Skype seats will always be the cheap seats, and be less likely to get their turn in the question and answer sessions.
Every year in the US, and at various intervals in other countries, academics must pull together what they have done to provide administrators with the data required for their indicators of performance. Just as metrics provided baseball teams with a new tool for more systematically choosing players, based on their stats, as portrayed in the popular film Moneyball, so universities hope to improve their performance and rankings by relying more on metrics rather than the intuitions of faculty. Metrics are indeed revolutionizing the selection, promotion, and retention of academics, and units within universities. Arguably, they already have done so. The recruitment process increasingly looks at various scores and stats about any given candidate for any academic position.
Individual academics can’t do much about it. And increasingly, the metrics will be collected without the academic even doing any data gathering, as data on citations, publications, and teaching ratings get generated in the course of being an academic. Academic metrics are becoming one more mountain of big data ready for computational analysis.
I am too senior (old) to be worried about my own metrics. They are not great, but they are as good as they will ever be. My concern is most often with administrators tending to count everything that can be counted, rather than trying to develop indicators that get to the heart of academic performance. Of course, this is extremely difficult since academics seldom agree on the rating of their colleagues. A scholar who is a superstar to one academic is conceptually dead from another academic’s perspective. So this controversy is one of many factors driving academia towards more indicators or hard evidence of performance. The judgments of scholars vary so dramatically. At least by counting what can be counted, there is some harder evidence that might be indicative of what we try to measure – quality.
So what can we count? It varies by university, but I’ve been in universities that count publications, of course, but every kind of publication, from refereed journal articles to blogs. And each of these might be rated, such as by the status of the journal in which an article appears, or the prestige of the publisher of a book. But that is only the beginning. We count citations, conference papers, talks, committees, awards, and more. Therefore, we perennially worry about whether we published enough in the right places, and did enough of anything that is counted.
In the UK, there has been an effort to measure the impact of an academic’s work. There have been entire conferences and publications devoted to what could be meant by impact and how it could be measured. Arguably, this is a well-intentioned move toward measuring something more meaningful. Rather than simply counting the number of publications (output), why not try to gauge the impact (outcomes) of the work? It is just that it is difficult to reliably and validly measure impact, given that the lag between academic work and its impact can be years or decades. Take Daniel Bell’s work on the information society, which had a huge impact that went well beyond what might have been expected in the immediate aftermath of his publication of The Coming of Post-Industrial Society. Nevertheless, indicators of impact will inevitably be added to the growing number of other indicators, even though universities will spend an unbelievable amount of time trying to document this metric.
In this environment, because I am senior in academia, I sometimes get asked how a colleague should think about these metrics. Where should they publish? How many articles should they publish? To which publisher should they submit their book? It goes on and on.
I try to give my opinion, but my most general response, when I feel like it will be accepted as advice and not criticism, is to focus on contributing something new to your field. Rather than think about numbers, think about making a contribution to how people think about your field.
This must go beyond the topic of one’s research. It is okay to know what topics or areas an academic works in, but what has he or she brought to that field? Is it a new way for doing research on a topic, a new concept for the area, or a new way of thinking about the topic?
In sum, if an academic’s career were considered by another academic familiar with their work, could that person say the academic had made an original, non-trivial contribution to the study of their field? This is very subjective and difficult to answer, which may be why administrators move to hard indicators. Presumably, if someone has made an important new contribution, their work will be published and cited more than someone who has not. That’s the theory.
However, the focus on contributing new ideas can give academics a more constructive motivation and an aim to guide their work. Rather than feeling that your future is based on getting x number of journal articles published, you make publication a means to a more useful end: furthering progress in your field of study. If you accomplish this, the numbers, reputation, and visibility of your work will take care of themselves. What would be a new contribution to your field? That is exactly the right question.
The 6th ACM Web Science Conference will be held 23-26 June 2014 on the beautiful campus of Indiana University, Bloomington. Web Science continues to focus on the study of information networks, social communities, organizations, applications, and policies that shape and are shaped by the Web.
The WebSci14 program includes 29 paper presentations, 35 posters with lightning talks, a documentary, and keynotes by Dame Wendy Hall (U. of Southampton), JP Rangaswami (Salesforce.com), Laura DeNardis (American University), and Daniel Tunkelang (LinkedIn). Several workshops will be held in conjunction with the conference on topics such as altmetrics, computational approaches to social modeling, the complex dynamics of the Web, the Web of scientific knowledge, interdisciplinary coups to calamities, Web Science education, Web observatories, and cybercrime and cyberwar. Conference attendees will have an opportunity to enjoy the exhibit Places & Spaces: Mapping Science, meant to inspire cross-disciplinary discussion on how to track and communicate human activity and scientific progress on a global scale. Finally, we will award prizes for the most innovative visualizations of Web data. For this data challenge, we are providing four large datasets that will remain publicly available to Web scientists.
I have agreed to co-chair the next Web Science Conference, Web Science 2014, which will be held at Indiana University. The lead chairs are Fil Menczer and his group at Indiana University, and Jim Hendler at Rensselaer Polytechnic Institute, one of the originators of the Semantic Web. The dates are 23-26 June 2014.
My mission is to help bring social scientists and humanities scholars to this conference to ensure that it is truly multi-disciplinary, and also to help encourage a more global set of participants, attracting academics from Europe and worldwide.
For those who are not quite sure of the scope and methods of Web Science, let me recommend a chapter in my handbook by Kieron O’Hara and Wendy Hall, entitled ‘Web Science’, pp. 48-68 in Dutton, W. H. (2013) (ed.), The Oxford Handbook of Internet Studies. Oxford: Oxford University Press. The core of the Web Science community sometimes views this as a field or discipline in its own right, while I would define it as a topic or focus within a broader, multidisciplinary field of Internet Studies.
In any case, I will be adding to this blog over the coming months as the conference planning progresses, but please consider participating. Information about the conference is posted at: http://websci14.org/#
Professor & Presidential Chair in Information Studies
University of California, Los Angeles
Oliver Smithies Visiting Fellow and Lecturer
Balliol College, University of Oxford
Scholars are expected to publish the results of their work in journals, books, and other venues. Now they are being asked to publish their data as well, which marks a fundamental transition in scholarly communication. Data are not shiny objects that are easily exchanged. Rather, they are fuzzy and poorly bounded entities. The enthusiasm for “big data” is obscuring the complexity and diversity of data and of data practices across the disciplines. Data flows are uneven – abundant in some areas and sparse in others, easily or rarely shared. Open access and open data are contested concepts that are often conflated. Data are a lens to observe the rapidly changing landscape of scholarly practice. This talk is based on an Oxford-based book project to open up the black box of “data,” peering inside to explore behavior, technology, and policy issues.