The Co-Production of Knowledge: iCS Symposium, University of York, 18-20 July 2012: Call for Papers and Participation

Symposium to be held at the University of York, UK, 18-20 July 2012

Call for Papers:

The ubiquitous social and cultural adoption of social media, such as Twitter, Google, Wikipedia, YouTube and Facebook, can be seen to present a significant example of scientific and technological innovation in many contemporary societies. While some studies of social media and, more specifically, Web 2.0 platforms built around user-generated content, have made reference to the importance of the field of science and technology studies (STS) for understanding their development and diffusion, scholars working within this academic framework have yet to fully turn their focus on this area. This three-day symposium is intended to explore the intersection between STS and social media inquiry, with a specific focus on how Web 2.0 is both generative and challenging of different forms of knowledge (co-)production and the authority it commands.
• The user-centred and mass-collaboration characteristics of social media platforms have a clear affinity with recent STS models of the co-construction of technologies. Notions such as ‘prosumerism’ have been used to describe this blurring of the relationship between consumer and producer. However, we need to ask whether this is to be seen as co-construction or primarily as a re-engineering of labour relations and the locus of production. We also need to ask whether this ubiquity extends across all social media for all types of content. In other words, are new forms of expertise being inscribed, or are old knowledge hierarchies being reinforced?
• STS challenges the traditional perception of scientific ‘discovery’ and technological advancement, to demonstrate the co-production of claims to knowledge and the different forms and assemblages of knowledge this involves: how does this map onto commentaries on the importance of lay knowledge and ‘citizen science’ found in Web 2.0 as individuals and groups distribute ideas and information across their social networks? Could this provide a new impetus for ‘public interest science’?
• How do the same issues relate to the social sciences themselves: how might Web 2.0 provide opportunities for new forms of data and data analytics (for example, as ‘virtual knowledge’ via crowdsourcing, real-time data streaming, by-product data etc.), and in what ways do these challenge conventional social science by opening up questions about what data itself constitutes and what order of being it represents?
• How might lay, amateur knowledge be mobilised as ‘citizen science’, and what warrant, authorisation and location in established science might it secure? How might the contribution of Web 2.0 science platforms differ from the amateur societies of the 19th and 20th centuries?
• It has been claimed that algorithms and code play an increasingly powerful part in shaping and constituting everyday life; it has even been claimed that algorithms are creating new rules and power structures that unknowingly come to restructure social hierarchies and divisions. How, for example, do algorithms make decisions for us? How do algorithms bypass or re-craft human agency? What are the implications of this? Exactly how do algorithms, code and metrics shape everyday life and access to knowledge?
• Do the open source platforms and social media tools of Web 2.0 come into tension with the international standardisation and codification of global ICT infrastructures and local and global knowledge infrastructures?
• Finally, the more celebratory characterisations of social media emanating from the marketing world typically lack a critical focus: can social media and STS analyses build a political economy of Web 2.0 to provide such a focus, by explicitly addressing issues of participatory surveillance, exclusion and control?
Papers are invited that explore these broad questions around a number of possible themes, including:

• The boundaries and future of social media as a medium of knowledge creation, dissemination, and regulation
• The co-production of knowledge via Web 2.0 platforms
• Knowledge, expertise and disruptive/disrupted authority
• Capturing social media: the commercial/political exploitation by or empowering of Web 2.0
• Ownership, dissemination and use of scientific knowledge
• E-governance and the regulation of knowledge within social media
• National practices and global opportunities
• Novel forms of knowledge creation through group processes, archiving, digitization etc.
• Public and visible science
Confirmed plenary speakers include: Geof Bowker, University of Pittsburgh; Leah Lievrouw, UCLA; Adrian MacKenzie, Cesagen, University of Lancaster; Rob Proctor, e-Research Centre, University of Manchester; Robin Williams, ISSTI, Edinburgh; Sally Wyatt, e-Humanities Programme, Royal Netherlands Academy of Arts and Sciences.

This conference is intended to bring together some of the leading scholars in the fields of STS, Communication and Social Media analysis, and the history and philosophy of science to critically explore these issues.

Please send abstracts of proposed papers to sarah-shrive- by 29 February 2012. Registration information is available on the SATSU site:

Conference organising committee: David Beer, Darren Reed, Mike Hardey, Brian Loader, Sarah Shrive-Morrison, Andrew Webster, Robin Williams, Sally Wyatt

The deadline for this call for papers is 29 February 2012. If you are interested in submitting an individual paper or a panel of three papers, please go to the web link or contact the email address above.

Conference Fees
The iCS conference is entirely self-financed. iCS therefore needs to charge a conference fee applicable to all participating in this conference, including speakers. However, all panel organisers, speakers and moderators will receive a £25 discount on the conference fee. The conference fee covers the administration and production of the conference, hire of venue and a/v equipment, and catering costs. The estimated conference fees for this coming year are: Full fee between £100-£150; Concessions between £75-£125; Day fee between £75-£125 (all fees include lunch).

Digital Literacy and Self-Regulation Online: Insights for Policy: Event on Friday, 18 November 2011, University of Leicester, UK

ESRC Seminar Series: ‘Digital Policy: Connectivity, Creativity and Rights’ (RES-451-26-0849) 2011-13

‘Digital Literacy and Self-Regulation Online: Insights for Policy’
Friday November 18 2011, University of Leicester, UK

(Hosted by the Department of Media and Communication)

This seminar explores different understandings and roles of digital literacy and issues of online self-regulation. It works against the background of shifts towards individualization in the digital economy and the implications for policy. It approaches policy in the broad sense, recognizing the role of varied stakeholders, including non-governmental actors and organizations, and the importance of informal as well as formal processes. It considers the fast-changing nature of online technologies and access, the impacts on regulatory environments, and the specific contexts within which regulation can and should take place.

The seminar will address a range of issues related to digital literacy: what is it, where should it be developed, and who should be responsible for it? What kinds of organizations and processes are relevant to it now, and what kinds of developments should there be in the future? Other questions will include: what does online safety mean and what are its key components? Is there too much emphasis on technical rather than informational literacy? How do market drivers affect self-regulation? What are the generational issues that need to be addressed? The seminar will also examine the nature of self-regulation online, including in relation to the broader regulatory environment and the other actors engaged with it.

Confirmed speakers include: Brian Simpson (University of New England, Australia), Peter Lunt (University of Leicester), Gillian Youngs (University of Wales, Newport), Dr Martin L Poulter (Wikipedia) and Josie Fraser (Social & Educational Technologist and consultant)

Call for Papers

We still have room for more papers and would welcome proposals from PhD students, academics and media and other practitioners and policymakers working in this area.


We have a limited number of places for the seminar so would like to hear from anyone who would like to take part as soon as possible. There is no charge for attending and lunch will be provided. We can meet UK travel costs (standard rail fare) for speakers and PhD students. Contact

Paper proposals and requests to participate should be sent to Tracy Simmons ( who is organizing this seminar as soon as possible. The seminar series is led by Gillian Youngs (University of Wales, Newport) in collaboration with Tracy Simmons (University of Leicester), William Dutton (Oxford Internet Institute) and Katharine Sarikakis (University of Vienna). Weblink for seminar series:

YouTube clip

Moving Content Control Closer to the Household: Who is doing the research?

News of the launch of ParentPort should be of interest to all who follow communication issues, as it aims to provide a single, integrated site to help households complain about content or material they feel is inappropriate for children, such as by directing them to the appropriate regulator. This complements initiatives by the largest ISPs in Britain to provide new customers with access to software for filtering and blocking content deemed inappropriate for children. Some provide software for PCs; others apply controls at the ISP level. An overview of these initiatives is online here.

These are early days in the development of such facilities, but they seem to be the most responsible response to increasing demands for content regulation. The closer decisions can be moved to the user and the household, the more appropriate the controls are from most perspectives on the rights of Internet users. Enabling more effective self-regulation by users and households might take some pressure off policy-makers and regulators to apply Internet filtering regimes. Earlier efforts, such as the US Violence-Chip or V-Chip introduced during President Clinton’s administration, have not been a great success. However, these initiatives deserve support and research to determine how they can be made good enough to head off far blunter approaches that take control away from users and households.

I am not aware of research on these measures, but would encourage it and would be delighted to hear from any experts and researchers focusing on this area. The OII is doing some work on the home hub, in a study of future home networks and services, which is a promising locus for content controls in the future, and I would be particularly interested in any related work with this focus.



A Decade in Internet Time: OII-iCS Open Plenary Session on 22 September 2011 at 4.30pm at Said Business School

A Decade in Internet Time: OII-iCS Open Plenary Session

in celebration of the Oxford Internet Institute’s tenth anniversary

Thursday 22 September 2011 16:30 – 18:30

Location: Nelson Mandela Lecture Theatre, Said Business School, Park End Street, Oxford OX1 1HP.

This public plenary panel is the centrepiece of the iCS-OII Symposium on A Decade in Internet Time, and the 10th Anniversary celebration of the OII. A distinguished panel has been asked to reflect on the defining developments of the past ten years and the key challenges and opportunities that the next decade may bring. We hope this special session will stimulate and inform debate over the future of the Internet and our field.

Chair: Bill Dutton, Professor of Internet Studies, Oxford Internet Institute


Manuel Castells, Wallis Annenberg Chair in Communication, Technology and Society, University of Southern California, ‘Internet and the Network Society’

Vint Cerf, Chief Internet Evangelist, Google, ‘ Everything is Connected to Everything’

Andrew Graham, Master of Balliol College, University of Oxford, ‘The Internet: Looking Back and Looking Forward’

Wendy Hall DBE FRS FREng, Professor of Computer Science and Dean of the Faculty of Physical and Applied Sciences, University of Southampton, ‘A Web Wise World’

Professor Eli Noam, Columbia Institute for Tele-Information, Columbia University, ‘Next-Generation Policy Research for Next-Generation Internets’

Dame Stephanie Shirley BSc, CEng, FREng, The Shirley Foundation

18:30 – 19:30 Wine reception (Lobby, Said Business School)

Oxford Union Debate on Informal Learning

I participated in a debate at the Oxford Union last year on the significance of informal learning. I argued that informal learning is a critical resource that is being utilized by networked individuals, and that networked institutions, like universities, need to understand how to capture the value of these informal practices. A nice summary and edited video of the debate is available online, and published in eLearn Magazine.

Participants in Debate

Michael Nielsen speaking at the OII on Open Science

‘Doing Science in the Open’ a talk by Michael Nielsen

OII, 1 St Giles’, Oxford from 12-13.00 on 8 September 2011

Michael has written: “I’ll start this talk by describing the Polymath Project, an ongoing experiment in “massively collaborative” mathematical problem solving. The idea is to use online tools — things like blogs and wikis — to collaboratively attack difficult mathematical problems.  By combining the best ideas of many minds from all over the world, the Polymath Project has made breakthroughs on important mathematical problems.

What makes this an exciting story is that it’s about much more than just solving some mathematical problems. Rather, the story suggests that online tools can be used to transform the way we humans work together to make scientific discoveries. We can use online tools to amplify our collective intelligence, in much the same way as for millennia we’ve used physical tools to amplify our strength. This has the potential to accelerate scientific discovery across all disciplines.

Michael Nielsen

This is an optimistic story, but there’s a major catch. Scientists have for the most part been extremely conservative in how they use the net, often using it for little more than email and passive web browsing. Projects like Polymath are the exception, not the rule. I’ll discuss why this conservatism is so common, why it’s so damaging, and how we can move to a more open scientific culture.”

Background reading:

The Future of Science

The talk is based on the book “Reinventing Discovery”, to be published by Princeton University Press on October 21, 2011.

Michael Nielsen is an author and an advocate of open science. His book about open science, Reinventing Discovery, will be published by Princeton University Press in October 2011. Prior to his book, Michael was an internationally known scientist who helped pioneer the field of quantum computation. He co-authored the standard text in the field, and wrote more than 50 scientific papers, including invited contributions to Nature and Scientific American. His work on quantum teleportation was recognized in Science Magazine’s list of the Top Ten Breakthroughs of 1998. Michael was educated at the University of Queensland, and as a Fulbright Scholar at the University of New Mexico. He worked at Los Alamos National Laboratory, as the Richard Chace Tolman Prize Fellow at Caltech, was Foundation Professor of Quantum Information Science and a Federation Fellow at the University of Queensland, and a Senior Faculty Member at the Perimeter Institute for Theoretical Physics. In 2008, he gave up his tenured position to work full-time on open science.

Next Generation Research and the Oxford e-Social Science Project

May I draw your attention to a recent article in the Journal of Information Technology that presents a framework I’ve developed for conceptualising the social and technical choices shaping the next generation of research:

If you would like an offprint please contact giving your name and postal address.

The paper draws on research undertaken over the last five years in the Oxford e-Social Science project (OeSS), which was central to our edited book, World Wide Research.  The project aims to understand how e-Research projects negotiate various social, ethical, legal and organizational forces and constraints, in order to help researchers avoid these problems when building scientific collaborations and tools for research.

World Wide Research

Hold the date: We will be holding a number of events in the coming months drawing on the research of OeSS, which may be of interest to you. Further details to follow:

8 September in Oxford: Michael Nielsen on his forthcoming book with Princeton University Press, entitled Reinventing Discovery: The New Era of Networked Science.

24 November in Oxford or London: a showcase event highlighting some of the conclusions of the OeSS project that can inform and stimulate debate over the ethical, legal and institutional implications for the future of digital research across all disciplines.



William Dutton, Professor of Internet Studies

You can access my papers on the Social Science Research Network (SSRN) at:


Selected Responses to Jeremy Hunt’s Open Letter

I worked with several colleagues at the OII (Victoria Nash, Monica Bulger, and Alissa Cooper) to pen responses to Jeremy Hunt’s Open Letter, requesting feedback of relevance to the new communications bill. They were submitted under my name as director of the OII, but also as a Co-Principal Investigator of the ESRC Seminar Series, entitled ‘Digital Policy’. In fact, all of these responses were shaped to some degree by discussions that took place at the OII Forum, entitled ‘Digital Policy Issues of the New Communications Bill’, held at the OII on 24 June 2011. A summary of that forum will be distributed in due course. In the meantime, these responses provide some sense of what my colleagues and I took away from the forum.

Question 1

What could a healthier communications market look like? How can the right balance be achieved between investment, competition and services in a changing technological environment?

Many of the questions in this review focus on aspects of competition and industrial policy; however, it is our view that for the economic benefits of the Internet to be maximised, attention must also be devoted to closing the digital divide. Efforts such as Race Online 2012 demonstrate that the UK government realizes the significance of access to the Internet in supporting efforts to erase the digital divide, increase participation and enhance digital media literacy. Yet less than 30 percent of adults in the UK report receiving training in media literacy, even though training could promote participation among those with little to no experience (Ofcom, 2011; Livingstone & Wang, 2011). Our view is that access must be paired with an understanding of options and risks to promote a healthier communications market. Based on our 2011 OxIS survey findings, 73 percent of individuals in the UK use the Internet, leaving more than a quarter of the population off the Internet. Efforts to increase Internet use among Britons have critical significance for 21st-century economic and civic participation, but they need adequate resources to promote understanding of the associated opportunities and risks.

For earlier OxIS figures see:

Dutton, W. H., Helsper, H. J., and Gerber, M. M. (2009), The Internet in Britain. Oxford: Oxford Internet Institute, University of Oxford.

Livingstone, S. and Wang, Y. (2011), Media Literacy and the Communications Act. London: LSE.

Ofcom (2011), UK adults’ media literacy. London: Ofcom.

Question 3

Is regulatory convergence across different platforms desirable and, if so, what are the potential issues to implementation?

This question was discussed at a recent policy forum convened by the Oxford Internet Institute, in which field-leading academics with media, communications and regulatory expertise were asked to consider the proposed Review of the Communications Act. This forum served to reinforce our view that it would be a significant mistake to seek regulatory convergence across platforms if this means imposing a model of broadcast regulation on the Internet. It is often assumed that the Internet is a modern-era ‘Wild West’, lawless and unregulated. In fact, the opposite is true – there is already extensive regulation of Internet service provision, content and activities. We would argue that traditional regulatory models for broadcasting, common carriers (such as post or telecommunications) and the press cannot be imposed wholesale on the Internet without serious risks to its vitality and its contribution to the UK economy, as well as potential chilling effects on speech. Further analysis of this point can be found in: Dutton, W. H. (2010), ‘Aiming at Copyright Infringers and Hitting the Digital Economy’, Prometheus, Vol. 28, No. 4, pp. 385-388, December 2010. Available at SSRN:

Question 13

Where has self- and co-regulation worked successfully and what can be learnt from specific approaches? Where specific approaches haven’t worked, how can the framework of content regulation be made sufficiently coherent and not create barriers to growth, but at the same time protect citizens and enable consumer confidence?

Many different regulatory models have been applied to various aspects of the Internet. Mobile operators in the UK voluntarily adopted industry codes of conduct to limit minors’ access to adult content and to limit the use of location-aware services. Similarly, the UK-licensed Internet gambling industry has proved that age verification (at least at the 18 threshold) is possible, and has been widely recognised as implementing this so successfully that even the child protection lobby has registered its satisfaction with the system. The UK model for control of illegal content, such as child pornography and hate speech, could undoubtedly benefit from more transparency and judicial oversight, but has broadly proved an effective way to limit the distribution of such material. Such measures are almost all co-regulatory: individual businesses and industry bodies sign up to common codes of conduct or unofficial norms, with the backing (or threat) of legislation.

We do not believe that the Internet requires further heavy-handed regulation, and would propose two principles as a suitable basis for advance:

• A presumption in favour of ‘democratised regulation’, namely pushing more control to the users and producers of communication and information services – the public. This is not simply another term for self-regulation, as it requires regulatory support at many levels (see below). A good example of democratised regulation would be the currently evolving system for content regulation, whereby only extremely limited forms of illegal content (such as child pornography) might be blocked by mandate or on a centralized basis, with users having access to PC-based tools, a ‘home hub’, or an ISP filtering system that enables them to choose how much content (if any) they want filtered. In this sense, parents, educators and users generally could be given more control over their own communications infrastructure in a way that is low cost for government and industry.

• A presumption in favour of regulation only where it is needed to ensure the preservation of a fair, accessible and open Internet, or to protect the most fundamental rights, such as freedom of speech or protection from abuse.

I would also like to draw your attention to a related post by Roger Darlington. He has been posting links to other submissions here:

Roger Darlington’s Website:

David Graham’s Blog:

Converging Technologies, Divergent Cultures

The FT published an interesting comment by Vittorio Colao, the Chief Executive of Vodafone, which essentially argued that the French President, Nicolas Sarkozy, was right to argue for stronger regulation of the Internet (FT, 6 June 2011). Mr Colao’s view nicely illustrates the degree to which real convergence of media must be based on more than simple technical convergence. Technically, the mobile phone and the Internet are increasingly converging on common and hybrid infrastructures. However, the mobile phone industry is anchored in a very different culture – the same culture that has fostered so-called ‘walled gardens’, whose walls have only recently begun to be lowered. Surely the mobile industry has evidence that most users do not wish to be walled in by their providers. Whoever is right in the longer run, this cultural split between the closed mobile phone and open Internet industries is a major obstacle to real convergence. I can’t see the world returning to walled gardens, but that might be my own wishful thinking.

An excellent discussion of the many dimensions of convergence was written years ago by Nicholas Garnham, see: N. Garnham, ‘Constraints on Multimedia Convergence’, pp. 103-19 in Dutton, W. H. (1996, reprinted 2001), Information and Communication Technologies – Visions and Realities (Oxford: Oxford University Press).


Addicted to the Internet?

Stop complaining about how you can’t get away from e-mail, the Web or social networking – that the Internet is undermining your productivity: disconnect yourself! Of course you can always choose not to use the Internet, but now you can disconnect yourself with the aid of an app for up to eight hours at a time. It’s called Freedom by its developers. See:

There are serious issues around notions of ‘Internet addiction’, and we have studied this at the OII. However, the idea of addiction to technologies like the Internet, and of disconnecting yourself from these devices, is not new. My students and I studied the impact of a pager blackout in the late 1990s, when journalists across the US argued that the pager blackout was like a ‘snow day’, freeing people from the demands of everyday life.* Our research found that this was true for middle-class managers and professionals, such as doctors and journalists, but far less the case for those who were more marginal, such as the unemployed, women compared with men, minorities, and those who depended on the pager for their next job, such as construction workers. To them, the pager was more central to their connections with family, friends, and employment. The pager freed them from remaining by a phone, for example, just as the Internet can free an individual from being where they need to be to get information or connect with a friend or associate.

We are hoping to study Internet addiction and other risks tied to Internet use, but it is important to note that these impacts are likely to be socially distributed in quite meaningful ways. Technologies seldom fit into everyone’s life in the same way. In many respects, that is the nice aspect of this new app – anyone complaining about the Internet undermining their productivity no longer has a real excuse.


*Dutton, W. H., Elberse, A., Hong, T., and Matei, S. (2001), ‘Beepless in America: The Social Impact of the Galaxy IV Pager Blackout,’ pp. 9-32 in S. Lax (ed.), Access Denied in the Information Age, London: Macmillan.