Douglas Carl Engelbart (1925-2013) is cited most prominently for his 1968 “Mother of All Demos”. He introduced his team’s research program using an early time-sharing computer system, which he called the “oN-Line System” (NLS), and which used a “mouse” to support human interaction with the computer. In his demonstration, for example, he showed his audience how he could cut and paste words he had highlighted with the mouse, how he could create a shopping list, and more. He was decades ahead of his time.
The mouse was one concrete invention that arose from his Augmented Human Intelligence Research Center – later called the Augmentation Research Center – based at the Stanford Research Institute (SRI), where he and a small group of colleagues began developing the NLS. This was in line with his focus on using computing to complement human intelligence – what he called “augmented intelligence” rather than artificial intelligence (AI). He credited Vannevar Bush for inspiring his vision, particularly through Bush’s essay ‘As We May Think’. When he visited the Oxford Internet Institute (OII) in 2004, he told us how a member of his team thought the device they were using looked like a mouse, tail and all, so they used that term as a placeholder until they came up with a proper name for it. The name stuck.
Just as Engelbart was inspired by Bush, Engelbart inspired many others, such as Ted Nelson (1987), who coined the term hypertext and pursued his visionary work on the Xanadu project. Ted was with us at the OII in 2004 and helped host Doug Engelbart’s visit. The concept of hypertext was clearly an influence on the development of the World Wide Web by Tim Berners-Lee and his colleagues at the European Laboratory for Particle Physics (CERN) in Switzerland. The Web’s development in the open innovation culture of CERN has been critical to countless other developments of the Internet, the Web, and related digital media (Dutton 2013: 9-10).
Many colleagues are beginning to document and archive the course of developments, and their interrelationships, in the short but incredible history of information and communication technologies (ICTs) like the Internet and Web. Examples include the Engelbart Archive and the UK’s Archive for IT, which I have only begun to follow more closely. It may be too early, but perhaps we can someday begin to track the course of innovations in Internet studies as well, as I began to describe in The Oxford Handbook of Internet Studies (Dutton 2013).
[The following commentary is authored by A. Michael Noll, and posted with the permission of the author. It illustrates the disagreement among experts on the social implications of new technologies, such as robotics, AI, cloud computing, and the Internet, demonstrating the value of continued research on the actual implications across different contexts and applications.]
The article “Rein In The Robots” by Kate Crawford (TIME, Vol.198, Nos.7-8, Aug.23-30, 2021, p.95) advocates “protection against the unchecked growth of artificial intelligence.” There is nothing new in her position. There have always been those who oppose any new technology or medium.
Artificial intelligence (AI) is decades old, and today has become a buzzword wrapping itself around such old concepts as computerization, pattern recognition, automation, robotics, and machine learning. It is hard to know what to fear when AI seems to encompass nearly everything.
Robots are machines. Robots do not have feelings, and thus it is tempting to attack them with headlines like “Rein In The Robots.” Actually, robots perform the heavy and tiresome work that humans are not equipped to perform. Robots clean floors tirelessly. Robots help the elderly overcome isolation. Robots entertain children. But they also scare us, such as the robot in the classic movie “The Day the Earth Stood Still.”
People fear what they do not understand. I made sure that my students understood the process of converting a signal to a digital representation. That way my students did not fear the digital revolution – they understood digitization and were not prey to all the hype. AI is a lot of hype – and that leads to fear and misunderstanding – and conspiracy theories.
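Noll’s point about understanding digitization can be made concrete with a short sketch. The following Python example – my illustration, not part of the original commentary – performs the two steps at the heart of converting a signal to a digital representation: sampling at discrete times, then quantizing each sample to one of a fixed number of levels.

```python
import math

def digitize(signal, sample_rate, duration, levels):
    """Sample a continuous signal and quantize each sample.

    signal: a function of time (seconds) returning values in [-1, 1].
    sample_rate: samples per second (Hz).
    levels: number of discrete quantization levels.
    """
    n_samples = int(sample_rate * duration)
    step = 2.0 / (levels - 1)  # spacing between quantization levels
    digital = []
    for n in range(n_samples):
        t = n / sample_rate                  # sampling: measure at discrete times
        value = signal(t)
        code = round((value + 1.0) / step)   # quantization: snap to nearest level
        digital.append(code)
    return digital

# A 5 Hz sine wave, sampled at 40 Hz for one second, with 16 levels (4 bits per sample).
samples = digitize(lambda t: math.sin(2 * math.pi * 5 * t), 40, 1.0, 16)
```

Once a reader sees that a “digital” signal is just this list of integer codes, the mystique – and much of the fear – falls away.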
Crawford mentions a “small, homogenous group of very wealthy people based in a handful of cities without any real accountability” as those driving the growth of AI. In expressing her conspiracy theory, she fails to accept that she, as an employee of Microsoft, is one of those people.
The real threat, in my opinion, is to computer security and our privacy from information stored centrally in computerized files – what today is called “the cloud.” Whether we know it or not, much of the information on our smartphones and computers is stored in the cloud, where it risks being accessed and analyzed without our knowledge or approval, and used against us by governments and others.
John R. Pierce (the father of communication satellites) liked to say that he was concerned not with artificial intelligence, but with the natural stupidity of humans! Indeed, it is the latter we need to fear.
Way back in 1961, my article “Electronic Computer – Friend or Foe” expressed the dangers that might occur when “the computer is used to make logical decisions.” I suggested caution “before the axe and sledgehammer” becomes the only remedy. I guess little has changed – be ready to pull the plug!
August 16, 2021
A. Michael Noll is Professor Emeritus at USC Annenberg. John R. Pierce and he are the authors of SIGNALS: The Science of Telecommunication.
I am not an economist, but even I can see the huge flaws in a recently published “cost/benefit analysis of the UK’s online safety bill”. My immediate reactions:
The author, Sam Wood, of ‘The Economics of Online Harms Regulation’ in InterMEDIA, begins with an argument that the pandemic ‘[fuelled] concerns about harmful content and behaviour encountered online’. Quite the contrary: I think it is arguable that the Internet and related online media became a lifeline for households in the UK and across the world during this period of lockdowns and working from home. The impetus for the online harms [now called ‘safety’] bill was fuelled by the demonization of social media in the years before the pandemic. So, from the very introduction to this piece, I worried about the credibility of the economic analysis it promises.
I was not disappointed. It is jaw-dropping. Even after enumerating some of the many online harms to be addressed, and outlining some of the ‘challenges’ in quantifying them, the piece proceeds to do exactly that. Using Department for Digital, Culture, Media & Sport (DCMS) estimates, the author argues that the estimated costs of seven types of harm – the greatest being cyberstalking, at £2,176m over ten years beginning in 2023 – are much greater than the costs of implementing the bill, the greatest of which is ‘content moderation’, estimated at £1,700m over the same ten years.
Pulling the costs of regulation out of a hat?
Of course, the costs of implementing this bill are not simply captured by the activities enumerated: awareness campaigns, creating reporting mechanisms, updating terms of service, producing risk assessments, content moderation, and transparency reports. What has the analysis left out?
Well, what about reductions in freedom of expression and commensurate reductions in the value and use of the Internet and related social media for the public good? This will be the major impact of the disproportionate incentivisation of censorship by tech platforms seeking to avoid the potentially huge costs to be imposed by government regulators.
The duty of care solution is itself the problem: it will have major negative impacts on what has long been a lifeline for households, educators, healthcare professionals, and government departments – a role made especially visible by the pandemic. The duty of care mechanism will incentivise censorship and surveillance of users, and push the tech platforms to act like newspapers, performing ever stronger editorial roles, such as determining what is ‘disinformation’.
One could ask: Would not an economist need to list all the benefits of social media and related activities, and not just the costs?
This is not a neutral, critical analysis, but seems to be a political stitch-up to support the proposed regulation. That said, such a flawed analysis might well make a better case for opposing than supporting this bill. Read it, consider its flaws, and oppose this misguided effort to address particular grievances by introducing a terrible policy. The proposed bill will do unmeasured damage to one of the most critical infrastructures available here and now for enabling enhanced means of communication and information for every age group in the UK and worldwide.
With apologies to the journal editors, but if the BBC or public service broadcasting were subject to such a flawed analysis, I am sure that InterMEDIA would not have even considered publishing such a piece. Then again, this bill seems to enjoy widespread support, and specifically advertises its intent to protect freedom of expression. Yet how often do the intentions of regulation end in failure, with the unintended collateral damage overwhelming any positive outcomes? I’m afraid we are about to see this happen, as this bill undermines an open and global Internet and free expression, and privacy is further eroded in order to enforce tech’s duty of care.
Progress on the pandemic is allowing the UK to talk about moving back to a new normal. In that spirit, may the UK apply a level of common sense and closer parliamentary and public scrutiny to the online safety bill – a level of care that such an important piece of media regulation would normally receive.
 Sam Wood, ‘The Economics of Online Harms Regulation’, InterMEDIA, 49(2): 31-34.
The COVID-19 pandemic has driven the Internet and related social media and digital technologies to the forefront of societies across the globe. Whether in supporting social distancing, working at home, or online courses, people are increasingly dependent on online media for everyday life and work. If you are teaching courses on the social aspects of the Internet, social media, and life or work in the digital age, you might want to consider a reader that covers many of the key technical and social issues.
Please take a look at the contents of the 2nd Edition of Society and the Internet (OUP 2019), which is available in paperback and electronic editions. Information about the book is available online here.
Whether you are considering readings for your Fall/Autumn courses, or simply have an interest in the many social issues surrounding digital media, you may find this book of value. From Manuel Castells’ Foreword to Vint Cerf’s concluding chapter, you will find a diverse mix of contributions that show how students and faculty can study the social shaping and societal implications of digital media.
In addition to Manuel Castells and Vint Cerf along with the editors, our contributors include: Maria Bada, Grant Blank, Samantha Bradshaw, David Bray, Antonio A. Casilli, Sadie Creese, Matthew David, Laura DeNardis, Martin Dittus, Elizabeth Dubois, Laleah Fernandez, Sandra González-Bailón, Scott Hale, Eszter Hargittai, Philip N. Howard, Peter John, Silvia Majó Vázquez, Helen Margetts, Marina Micheli, Christopher Millard, Lisa Nakamura, Victoria Nash, Gina Neff, Eli Noam, Sanna Ojanperä, Julian Posada, Anabel Quan-Haase, Jack Linchuan Qiu, Lee Rainie, Bianca Reisdorf, Ralph Schroeder, Limor Shifman, Ruth Shillair, Greg Taylor, Hua Wang, Barry Wellman, and Renwen Zhang. Together, these authors offer one of the most useful and engaging collections on the social aspects of the Internet and related digital media available for teaching.
Thanks for your own work in this field, at an incredible period of time for Internet and new media studies of communication and technology.
News reports today cite one of the inventors of the Web, Sir Tim Berners-Lee, as arguing that the Web “is not working for women and girls”. Tim Berners-Lee is a hero to all of us involved in the study and use of the Internet, Web, and related information and communication technologies. Clearly, many women and girls might well ‘experience violence online, including sexual harassment and threatening messages’. This is a serious problem, but it should not go unnoticed that the Internet and Web have been remarkably gender neutral with respect to access.
In fact, women and girls access and use the Internet, Web and social media at about the same level as men and boys. There are some nations in which the use of the Internet and related ICTs is dramatically lower for women and girls, but in Britain, the US, and most high-income nations, digital divides are less related to gender than to such factors as income, education, and age. This speaks volumes about the value of these media to women and girls, and this should not be lost in focusing on problematic and sometimes harmful aspects of access to content on the Web and related media.
Below is one example of Internet use by gender in Britain in 2019, which shows that women are more likely to be next generation users (using three or more devices, one of which is mobile) and less likely to be non-users:
The full report from which this is drawn is available online here.
Professor Noam has focused attention on what seems like a benign and economically rational technical shift from linear TV to online video. Most people have some experience with streaming video services, for example. But the longer term prospects of this shift could be major (we haven’t seen anything yet) and have serious social implications that drive regulatory change, and also challenge those charged with managing the media. What is the next generation of digital television? Can it be managed? Are the principles of business management applicable to new digital organizations?
The Principal of Green Templeton College, Professor Denise Lievesley opened the session and introduced the speaker, and two discussants: Professor Mari Sako, from the Saïd Business School, and Damian Tambini, from the Department of Media and Communication at LSE, and a former director of Oxford’s Programme in Comparative Media Law and Policy (PCMLP). Following Eli Noam’s overview of several of the key themes developed in his books, and the responses of the discussants, the speakers fielded a strong set of questions from other participants. Overall, the talk and discussion focused less on the management issues, and more on the potential social implications of this shift and the concerns they raised.
The social implications are wide ranging, including a shift towards more individualized, active, immersive, and global media. There will be some of the ‘same old same old’, but also ‘much more’ that brings many perspectives on the future of television into households. The concerns raised by these shifts range from threats to privacy and security to even shorter attention spans – can real life compete with sensational immersion in online video? Perhaps the central concern of the discussion was media concentration, not only in cloud services, such as those offered by the big tech companies, but also in national infrastructures, content, and devices.
This led to a discussion of the policy implications arising from such concerns, particularly in the aftermath of the 2016 elections, mainly around efforts to introduce governmental regulation of the global online companies and governmental pressures on platforms to censor their own content. This surfaced some debate over cross-national and regional differences in approaches to freedom of expression and media regulation. While there were differences of opinion on the need for and nature of greater regulation, there seemed to be little disagreement with Eli’s argument that many academics have moved from being cheerleaders to fear mongers, when we should all seek to be ‘thought leaders’ in this space, given that academics should have independence from government and the media, and an understanding informed by systematic research rather than conventional wisdom.
Eli is one of the world’s leading scholars on digital media and management, and his latest books demonstrate his command of this area. One of the speakers referred to his latest tome as an MBA in a box. The text has a version for undergraduate and graduate courses, but every serious university library should have them in their collection.
Eli Noam has been Professor of Economics and Finance at the Columbia Business School since 1976 and its Garrett Professor of Public Policy and Business Responsibility. He has been the Director of the Columbia Institute for Tele-Information, and one of the key advisors to the Oxford Internet Institute, having served on its Advisory Board since its founding in 2001 through the Institute’s first decade.
His new books on digital media and organizations have been praised by a range of digital and media luminaries, from Vint Cerf, one of the fathers of the Internet, to the former CEO of Time Warner, Gerald Levin, and the former CTO of HBO, Robert Zitter.
With the academic year fast approaching, we are hoping that the book will be useful for many courses around Internet studies, new media, and media and society. If you are teaching in this area, Mark and I hope you might consider this reader for your courses, and let your colleagues know about its availability. Authors of our chapters range from senior luminaries in our field, such as Professor Manuel Castells, who has written a brilliant foreword, to some promising graduate students.
Society and the Internet 2nd Edition.
How is society being reshaped by the continued diffusion and increasing centrality of the Internet in everyday life and work? Society and the Internet provides key readings for students, scholars, and those interested in understanding the interactions of the Internet and society. This multidisciplinary collection of theoretically and empirically anchored chapters addresses the big questions about one of the most significant technological transformations of this century, through a diversity of data, methods, theories, and approaches.
Drawing from a range of disciplinary perspectives, Internet research can address core questions about equality, voice, knowledge, participation, and power. By learning from the past and continuing to look toward the future, it can provide a better understanding of what the ever-changing configurations of technology and society mean, both for the everyday life of individuals and for the continued development of society at large.
This second edition presents new and original contributions examining the escalating concerns around social media, disinformation, big data, and privacy. Following a foreword by Manuel Castells, the editors introduce some of the key issues in Internet Studies. The chapters then offer the latest research in five focused sections: The Internet in Everyday Life; Digital Rights and Human Rights; Networked Ideas, Politics, and Governance; Networked Businesses, Industries, and Economics; and Technological and Regulatory Histories and Futures. This book will be a valuable resource not only for students and researchers, but for anyone seeking a critical examination of the economic, social, and political factors shaping the Internet and its impact on society.
Vint Cerf is internationally recognized as “an Internet pioneer” – one of the “fathers of the Internet” – in light of his work with Bob Kahn in co-inventing the Internet protocols (TCP/IP). He will be in East Lansing, Michigan, giving a Quello Lecture in celebration of the twentieth anniversary of the Quello Center. The Center was founded at MSU in 1998 to recognize the importance of James H. Quello’s contributions as one of the longest serving and most distinguished Commissioners of the Federal Communications Commission (FCC).
Arguably, over the first twenty years of the Quello Center’s existence, there has been no greater development shaping media and information technology, policy, and practice than the rise of the Internet and related information and communication technologies such as the Web, social media, and mobile Internet. But will the Internet play as central a role over the next twenty years?
To stimulate and inform debate around this question, we’ve asked Vint Cerf to provide his perspective on the Internet’s role in shaping media and information over the past twenty years, and in the coming decades. It is difficult to imagine another person who could provide such an authoritative perspective on twenty years in Internet time.
His lecture will be followed by questions and discussion as well as a reception. Join us on May 10th to celebrate and reflect on the most significant development shaping communication, media, and information over the life of the Quello Center, and to welcome Google’s Chief Internet Evangelist to MSU.
Delighted to hear about the announcement of the award of support by the German Ministry for Education and Research for a German Internet Institute. It will be based in Berlin and be called the Internet Institute for the Networked Society, or the Internet-Institut für die vernetzte Gesellschaft. The ministry has committed 50 million euros over five years, based on a scheme similar to the initial UK government funding for the Oxford Internet Institute (OII) at Oxford University, which was matched by a major gift from Dame Stephanie Shirley.
The OII was founded in 2001 as the first department at a major university focused on multi-disciplinary studies of the Internet. It complemented Harvard’s Berkman Center, which was focused on law and policy in its early years. 2001 was a time when the Internet was still dismissed by some academics as a fad. Since the OII’s founding, the study of the Internet has been one of the most burgeoning fields in the social sciences (Dutton 2013). I am pleased to see that the name of the new institute suggests it will be, like the OII, firmly planted in the social sciences, with many opportunities for collaboration across all relevant fields. I am also pleased that the new institute appears to build on the Alexander von Humboldt Institute for Internet and Society (HIIG), which spearheaded the development of a network of Internet research centers. Clearly, the new institute could make Berlin the center for Internet studies.
I am certain that many groups of academics competed for this grant, and that many will have been disappointed with the outcome. However, adding a major new center for Internet studies is going to lift all the growing numbers of centers and academics with a focus on the economic, societal and political shaping and implications of the Internet. And all of the scholars who put their efforts into competing proposals are likely to have many great ideas to continue pursuing.
So, my colleagues and I welcome the leaders and academics of the Internet Institute for the Networked Society to the world of Internet studies. The social and economic implications of the Internet are raising many technical, policy, and governance issues, from inequalities to fake news and more. Quite seriously, the world needs your institute along with many others to help shape responses to these issues in ways that ensure that the Internet continues to play a positive role in society.
I, along with others, am only now learning about this development. I look forward to hearing more in due course, and welcome any comments or corrections to this information – but the news is too great to hold back.
Dutton, W. H. (ed.) (2013, 2014), The Oxford Handbook of Internet Studies. Oxford: Oxford University Press.
Fake News is a Wonderful Headline but Not a Reason to Panic
I feel guilty for not jumping on the ‘fake news’ bandwagon. It is one of the new new things in the aftermath of the 2016 Presidential election. And because purposely misleading news stories, like the story of the Pope endorsing Donald Trump, engage so many people and have such an intuitive appeal, I should be riding this bandwagon. It could be good for my own research area around Internet studies. But I can’t. We have been here before, and it may be useful to look back for some lessons learned from previous moral panics over the quality of information online.
Fake news typically uses catchy headlines to lure readers into a story that is made up to fit the interests of a particular actor or interest. Nearly all journalism tries to do the same, particularly as journalism is moving further towards embracing the advocacy of particular points of view, versus trying to present the facts of an event, such as a decision or accident. In the case of fake news, facts are often manufactured to fit the argument, so fact checking is often an aspect of identifying fake news. And if you can make up the facts, it is likely to be more interesting than the reality. This is one reason for the popularity of some fake news stories.
It should be clear that this phenomenon is not limited to the Internet. For example, the 1991 movie JFK captured far more of an audience than the Warren Commission Report on the assassination of President Kennedy. Grassy Knoll conspiracy theories were given more credibility by Oliver Stone than were the facts of the case, and needless to say, his movie was far more entertaining.
Problems with Responding
There are, however, real problems with attacking fake news.
First, except in the more egregious cases, it is often difficult to definitively know the facts of the case, not to mention what is ‘news’. Many fake news stories are focused on one or another conspiracy theory, and therefore hard to disprove. Take the flurry of misleading and contradictory information around the presence of Russian troops in eastern Ukraine, or over who was responsible for shooting down Malaysia Airlines Flight 17 in July of 2014 over a rebel controlled area of eastern Ukraine. In such cases in which there is a war on information, it is extremely difficult to immediately sort out the facts of the case. In the heat of election campaigns, it is also difficult. Imagine governments or Internet companies making these decisions in any liberal democratic nation.
Secondly, and more importantly, efforts to mitigate fake news inevitably move toward a regulatory model that would or could involve censorship. Pushing Internet companies, Internet service providers, and social media platforms, like Facebook, to become newspapers that edit and censor stories online would undermine all news, and the evolving democratic processes of news production and consumption, which are thriving online with the rise of new sources of reporting, from hyper-local news to global efforts to mine collective intelligence. The critics of fake news normally say they are not proposing censorship, but they rather consistently suggest that the Internet companies should act more like newspapers or broadcasters in authenticating and screening the news. Neither regulatory model is appropriate for the Internet, Web, and social media.
Lessons from the Internet and Web’s Short History
But let’s look back. Not only is this not a new problem, it was a far greater problem in the past. (I’m not sure if I have any facts to back this up, but hear me out.)
Anyone who used the Internet and Web in the 1990s (the Web became publicly available in 1991) will recall that it was widely perceived as largely a huge pile of garbage. The challenge for a user was to find a useful artifact in this pile of trash. This was around the time when the World Wide Web was called the World Wide Wait, given the time it took to download a Web page. Given the challenges of finding good information in this huge garbage heap, users circulated URLs (web addresses) of web pages that were worth reading.
A few key researchers developed what were called recommender sites, such as what Paul Resnick and colleagues called the Platform for Internet Content Selection (PICS), which labeled sites to describe their content, such as ‘educational’ or ‘obscene’. PICS could be used to censor or filter content, but its promoters saw it primarily as a way to positively recommend rather than negatively censor content, such as that labeled ‘educational’ or ‘news’: positive recommendations of what to read, versus censorship of what a central provider determined unfit to be read.
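The recommend-versus-censor distinction can be illustrated with a toy sketch. PICS itself was a machine-readable label format that browsers and services could interpret; the Python below is an invented illustration – the sites, labels, and functions are all made up for the example, not the actual PICS specification – showing how the very same labels can drive either positive recommendation or negative filtering.

```python
# Hypothetical sites tagged with hypothetical content labels.
sites = {
    "http://example.edu/astronomy": {"educational"},
    "http://example.com/tabloid": {"entertainment"},
    "http://example.org/science-news": {"educational", "news"},
}

def recommend(sites, wanted):
    """Positive filtering: surface only sites carrying a wanted label."""
    return sorted(url for url, labels in sites.items() if labels & wanted)

def block(sites, banned):
    """Negative filtering (censorship): drop sites carrying a banned label."""
    return sorted(url for url, labels in sites.items() if not (labels & banned))

recommended = recommend(sites, {"educational"})
```

The data and logic are identical in both functions; only the intent differs – which is exactly why the same labeling infrastructure raised hopes for recommendation and fears of censorship.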
Of course, organized lists of valuable web sites evolved into some of the earliest search engines, and very rapidly, some brilliant search engines were invented that we use effortlessly now to find whatever we want to know online, such as news about an election.
The rise of fake news moves many to think we need to censor or filter more content to keep people from being misinformed. Search engines try to do this by recommending the best sites related to what a person is searching for, such as by analysis of the search terms in relation to the words and images on a page of content.
Unfortunately, as search engines developed, so did efforts to game them, such as techniques for optimizing a site’s visibility on the Web. Without going into detail, there has been a continuing cat-and-mouse game between search engines and content providers trying to outwit each other. Some early techniques to optimize a site, such as embedding popular search terms in the background of a page, invisible to the reader but visible to a search engine, worked for a short time. But new techniques for gaming the search engines are likely to be matched by refinements in algorithms that penalize sites that try to game the system. Over time, these refinements of search have reduced the prominence of fake and manufactured news sites, for example, in the results of search engines.
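This cat-and-mouse dynamic can be sketched in a toy scorer. Real ranking algorithms are vastly more sophisticated (and proprietary); the Python below is only a hypothetical illustration of the idea that hidden, stuffed keywords can be detected and penalized, with the pages, threshold, and scoring all invented for the example.

```python
def score(visible_text, hidden_text, query_terms):
    """Toy relevance scorer: count query-term matches in the visible text,
    but zero out pages whose hidden text is mostly stuffed query terms."""
    visible = visible_text.lower().split()
    hidden = hidden_text.lower().split()
    relevance = sum(visible.count(term) for term in query_terms)
    stuffed = sum(hidden.count(term) for term in query_terms)
    if hidden and stuffed / len(hidden) > 0.5:  # mostly invisible keywords
        return 0  # penalize an apparent attempt to game the ranking
    return relevance

# An honest page ranks on its visible content; a keyword-stuffed page is zeroed out.
honest = score("election results and analysis", "", ["election"])
gamed = score("buy cheap widgets", "election election election", ["election"])
```

Each refinement like this invites a new evasion, which is why the contest between search engines and optimizers never really ends.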
New Social Media News Feeds
But what can we do about fake news being circulated on social media, mainly on platforms such as Facebook, but also over email? The problems are largely focused here, since social media news provision is relatively less public, newer, and not as fully developed as the more mature search engines. And email is even less public. These interpersonal social networks might pose the most difficult problems, and are where fake news is likely to be less visible to the wider public, tech companies, and governments – we hope and expect. Assuming the search engines used by social media for the provision of news get better, some problems will be solved. Social media platforms are working on it. But the provision of information by users to other users is a complex problem for any oversight or regulation beyond self-regulation.
Professor Phil Howard’s brilliant research on computational propaganda at the Oxford Internet Institute (OII) develops some novel perspectives on the role of social media in spreading fake news stories faster and farther. His analysis of the problem seems right on target. The more we know about political bots and computational propaganda, the better prepared we are to identify it.
My concern is that many of the purported remedies to fake news are worse than the problem. They will lead straight to more centralized censorship, or to regulation of social media as if they were broadcast media, newspapers, or other traditional media. The traditional media each have different regulatory models, but none of them are well suited to the Internet. You cannot regulate social media as if they were broadcasters, think of the time spent by broadcast regulators considering one complaint by viewers. You cannot hold social media liable for stories, as if they were an edited newspaper. This would have a chilling effect on speech. And so on. Until we have a regulatory model purpose built for the Internet and social media, we need to look elsewhere to protect its democratic features.
In the case of email and social media, the equivalent of recommender sites are ways in which users might be supported in choosing with whom to communicate. Whom do you friend on Facebook? Whom do you follow on Twitter? Whose email do you accept, open, read, or believe? There are already some sites that detect problematic information. These could help individuals decide whether to trust particular sites or individuals. For example, I regularly receive email from people I know on the right, left and everywhere in between, and from the US and globally. As an academic, I enjoy seeing some, immediately delete others, and so forth. I find the opinions of others entertaining, informative and healthy, even though I accept very few as real hard news. I seldom if ever check or verify their posts, as I know some to be political rhetoric or propaganda and some to be well sourced. This is normally obvious on their face.
But I am trained as an academic and am, by nature, skeptical. So while it might sound like a damp squib, one of the only useful approaches that does not threaten the democratic value of social media and email is to educate users to critically assess the information they are sent through email and by their friends and followers on social media. Choose your friends wisely – and that means not on the basis of agreement, but on the basis of trust. And do not have blind faith in anything you read in a newspaper or online. Soon we will be just as amused by people saying they found a fake news story online as we have been by cartoons of someone finding a misspelled word on the Web.