Twitter Foreign Policy and the Rise of Digital Diplomacy

Recent Chinese concerns over ‘Twitter Foreign Policy’ are just the tip of the iceberg when it comes to the ways in which the Internet has been enabling diplomacy to be reconfigured, for better or worse. Over a decade ago, Richard Grant, a diplomat from New Zealand, addressed these issues in a paper I helped him with at the OII.[1] Drawing from Richard’s paper, there are at least five ways in which the Internet and social media are reconfiguring diplomacy:

  1. Changing who participates in diplomacy, creating a degree of openness and transparency, for example through leaks and whistleblowers like Edward Snowden, that puts diplomacy in the public eye, establishing an entire field of “public diplomacy”;
  2. Creating new sources of information for diplomacy, such as when mobile Internet videos become key to what is known about an event of international significance;
  3. Speeding up diplomatic processes, as the immediacy of news online demands more rapid responses in order to be effective, such as in challenging misinformation;
  4. Pushing diplomacy to be more event-led, when the world knows about events that diplomats cannot ignore; and
  5. Eroding borders, for example by enabling diplomats to communicate locally or globally from anywhere at any time.

These transformations do not diminish the need for diplomats to serve a critical role as intermediaries. If anything, the Internet makes it possible for diplomats to be where they need to be to facilitate face-to-face interpersonal communication, making the geography of diplomacy more, rather than less, important. However, it poses serious challenges for adapting diplomacy to a globally digital village, such as how to adapt hierarchical bureaucracies of diplomacy to respond to more agile networks, and how to best ‘join the conversation’ on social media.

[1] Richard Grant (2004), “The Democratization of Diplomacy: Negotiating with the Internet,” OII Research Report No. 5. Oxford, UK: Oxford Internet Institute, University of Oxford. See http://papers.ssrn.com/sol3/papers.cfm?abstract_id=1325241. Also discussed in a talk I gave last year on Mexico in the New Internet World: https://papers.ssrn.com/sol3/papers.cfm?abstract_id=2788392

Email Disrupting Life at Home?

Careful What You Ask For

In France and other nations there has been discussion of somehow banning email after 6pm or outside of working hours. Perhaps this could help provide a better work-life balance, or keep email from competing with family for attention at home. But this raises not only problems of implementation, but also the reverse: shall we start policing the personal use of communication and information technologies like email in the office?


Implementation would be impossible. You could get email at home or outside of work hours, but also work-related tweets, texts, messages, calls, video calls, WeChats, social media posts, and more. Email is only one avenue into the household, and it is declining in use relative to other social and new media. Implementation would also amount to regulatory overreach, with public regulation reaching into how media are used within households, private companies, NGOs, and other organizations.

But the greatest threat is that this will cut both ways. Companies, government departments, NGOs and others will want their employees and managers to stop using electronic media for personal reasons while at work, such as checking on your children, making reservations, or reading personal email.

The first dissertation I supervised on corporate email was in 1980, and one of the key issues in those early days, when email was beginning to be used in business instead of telegrams or faxes, was a worry that employees would use email for personal reasons that had nothing to do with work. My response, then and now, has been that this should not be a worry. Personal use of email at work is helpful for the morale and time management of people in the workplace, and – it goes both ways – email enables employees to handle some business at home. Especially in the early days of email, personal use helped bring business people online, as then and now, many resisted the use of online media for business purposes. There is a positive synergy (sorry to use that word) between the use of communication technologies at home and at work – a win-win.

Encourage and teach individuals to manage their time and self-regulate their engagement with work from home and vice versa, but don’t try to regulate something for which no one size fits all.

BBC news coverage: http://www.bbc.com/news/magazine-26958079

Don’t Panic over Fake News

Fake News is a Wonderful Headline but Not a Reason to Panic

I feel guilty for not jumping on the ‘fake news’ bandwagon. It is one of the new new things in the aftermath of the 2016 Presidential election. And because purposely misleading news stories, like the Pope endorsing Donald Trump, engage so many people, and have such an intuitive appeal, I should be riding this bandwagon.[1] It could be good for my own research area around Internet studies. But I can’t. We have been here before, and it is worth looking back for some lessons learned from previous moral panics over the quality of information online.

Fake News

Fake news typically uses catchy headlines to lure readers into a story that is made up to fit the interests of a particular actor. Nearly all journalism tries to do the same, particularly as journalism moves further toward embracing the advocacy of particular points of view, rather than trying to present the facts of an event, such as a decision or accident. In the case of fake news, facts are often manufactured to fit the argument, so fact checking is often an aspect of identifying fake news. And if you can make up the facts, the story is likely to be more interesting than the reality. This is one reason for the popularity of some fake news stories.

It should be clear that this phenomenon is not limited to the Internet. For example, the 1991 movie JFK captured far more of an audience than the Warren Commission Report on the assassination of President Kennedy. Grassy Knoll conspiracy theories were given more credibility by Oliver Stone than were the facts of the case, and needless to say, his movie was far more entertaining.

Problems with Responding

There are real difficulties in attacking the problem of fake news.

First, except in the more egregious cases, it is often difficult to know the facts of a case definitively, not to mention what counts as ‘news’. Many fake news stories are focused on one or another conspiracy theory, and are therefore hard to disprove. Take the flurry of misleading and contradictory information around the presence of Russian troops in eastern Ukraine, or over who was responsible for shooting down Malaysia Airlines Flight 17 in July 2014 over a rebel-controlled area of eastern Ukraine. In such cases, where there is a war over information, it is extremely difficult to sort out the facts immediately. In the heat of an election campaign, it is just as difficult. Imagine governments or Internet companies making these decisions in any liberal democratic nation.

Secondly, and more importantly, efforts to mitigate fake news inevitably move toward a regulatory model that would or could involve censorship. Pushing Internet companies, Internet service providers, and social media platforms like Facebook to become newspapers that edit and censor stories online would undermine all news, along with the evolving democratic processes of news production and consumption, such as those thriving online with the rise of new sources of reporting, from hyper-local news to global efforts to mine collective intelligence. The critics of fake news normally say they are not proposing censorship, but they rather consistently suggest that the Internet companies should act more like newspapers or broadcasters in authenticating and screening the news. Neither regulatory model is appropriate for the Internet, Web and social media.

Lessons from the Internet and Web’s Short History

But let’s look back. Not only is this not a new problem, it was a far greater problem in the past. (I’m not sure if I have any facts to back this up, but hear me out.)

Anyone who used the Internet and Web in the 1990s (the Web was invented in 1991) will recall that the Web was widely perceived as largely a huge pile of garbage. The challenge for a user was to find a useful artifact in this pile of trash. This was around the time when the World Wide Web was dubbed the World Wide Wait, given the time it took to download a Web page. Given the challenge of finding good information in this huge garbage heap, users circulated the URLs (Web addresses) of pages that were worth reading.

A few key researchers developed what were called recommender systems, such as what Paul Resnick and colleagues called the Platform for Internet Content Selection (PICS), which supported labels describing a site’s content, such as ‘educational’ or ‘obscene’.[2] PICS could be used to censor or filter content, but its promoters saw it primarily as a way to positively recommend, rather than negatively censor, content, such as sites labeled ‘educational’ or ‘news’: positive recommendations of what to read, versus censorship of what a central provider deemed unfit to be read.
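Resnick’s labeling idea can be illustrated with a toy sketch in Python. This is not the actual PICS label syntax or any real rating service’s API; the URLs and labels below are hypothetical, and the point is only the distinction drawn above: recommending by positive labels rather than centrally blocking content.

```python
# Hypothetical label data such as a rating service might publish.
# Real PICS labels used a richer, machine-readable syntax.
LABELS = {
    "http://example.edu/history": {"educational"},
    "http://example.com/tabloid": {"entertainment"},
    "http://example.org/daily": {"news"},
}

def recommend(urls, wanted):
    """Return the URLs carrying at least one wanted label:
    positive recommendation, not central censorship."""
    return [u for u in urls if LABELS.get(u, set()) & set(wanted)]
```

The same label data could just as easily drive a blocking filter (exclude anything labeled ‘obscene’), which is exactly the dual use described above.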

Of course, organized lists of valuable websites evolved into some of the earliest search engines, and very rapidly some brilliant search engines were invented that we now use effortlessly to find whatever we want to know online, such as news about an election.

The rise of fake news moves many to think we need to censor or filter more content to keep people from being misinformed. Search engines instead try to recommend the best sites related to what a person is searching for, such as by analyzing the search terms in relation to the words and images on a page of content.

Unfortunately, as search engines developed, so did efforts to game them, such as techniques for optimizing a site’s visibility on the Web. Without going into detail, there has been a continuing cat-and-mouse game between search engines and content providers trying to outwit each other. Some early techniques for optimizing a site, such as embedding popular search terms in the background of a page so that they are invisible to the reader but visible to a search engine, worked for a short time. But new techniques for gaming the search engines are likely to be matched by refinements in algorithms that penalize sites trying to game the system. Over time, these refinements of search have reduced the prominence of fake and manufactured news sites in the results of search engines.
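The hidden-keyword trick mentioned above can be made concrete with a toy detector. This is only an illustrative sketch, not how any real search engine detects spam; it simply flags text whose inline style declares the same colour for the text and its background:

```python
from html.parser import HTMLParser

class HiddenTextDetector(HTMLParser):
    """Toy detector for one classic keyword-stuffing trick:
    text styled with the same colour as its background, so it is
    invisible to readers but visible to a crawler."""

    def __init__(self):
        super().__init__()
        self._hidden_depth = 0  # nesting depth inside a "hidden" element
        self.hidden_text = []   # text fragments invisible to a reader

    @staticmethod
    def _is_hidden(style):
        # Naive check: 'color' and 'background-color' declared identical.
        decls = dict(
            (k.strip().lower(), v.strip().lower())
            for k, _, v in (d.partition(":") for d in style.split(";") if ":" in d)
        )
        return "color" in decls and decls["color"] == decls.get("background-color")

    def handle_starttag(self, tag, attrs):
        style = dict(attrs).get("style", "")
        if self._hidden_depth or self._is_hidden(style):
            self._hidden_depth += 1

    def handle_endtag(self, tag):
        if self._hidden_depth:
            self._hidden_depth -= 1

    def handle_data(self, data):
        if self._hidden_depth and data.strip():
            self.hidden_text.append(data.strip())
```

A real crawler combines many such signals, and spammers respond in turn, which is precisely the cat-and-mouse dynamic described above.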

New Social Media News Feeds

But what can we do about fake news circulated on social media, mainly on platforms such as Facebook, but also by email? The problems are largely concentrated here, since news provision on social media is less public, newer, and not as fully developed as the more mature search engines. Email is even less public. These interpersonal networks might pose the most difficult problems, and they are where fake news is likely to be least visible to the wider public, tech companies, and governments. Assuming the search tools used by social media for the provision of news get better, some problems will be solved, and the platforms are working on it.[3] But the provision of information by users to other users is a complex problem for any oversight or regulation beyond self-regulation.

Professor Phil Howard’s brilliant research on computational propaganda at the Oxford Internet Institute (OII) develops some novel perspectives on the role of social media in spreading fake news stories faster and farther.[4] His analysis of the problem seems right on target. The more we know about political bots and computational propaganda, the better prepared we are to identify it.

The Risks

My concern is that many of the purported remedies for fake news are worse than the problem. They lead straight to more centralized censorship, or to regulating social media as if they were broadcast media, newspapers, or other traditional media. The traditional media each have different regulatory models, but none of them is well suited to the Internet. You cannot regulate social media as if they were broadcasters: think of the time spent by broadcast regulators considering a single complaint from viewers. You cannot hold social media liable for stories, as if they were an edited newspaper; that would have a chilling effect on speech. And so on. Until we have a regulatory model purpose-built for the Internet and social media, we need to look elsewhere to protect their democratic features.

In the case of email and social media, the equivalent of recommender sites lies in the ways users might be supported in choosing with whom to communicate. Whom do you friend on Facebook? Whom do you follow on Twitter? Whose email do you accept, open, read, or believe? There are already some sites that detect problematic information.[5] These could help individuals decide whether to trust particular sites or individuals. For example, I regularly receive email from people I know on the right, the left and everywhere in between, from the US and globally. As an academic, I enjoy seeing some, immediately delete others, and so forth. I find the opinions of others entertaining, informative and healthy, even though I accept very few as real hard news. I seldom if ever check or verify their posts, as I know some to be political rhetoric or propaganda and some to be well sourced. This is normally obvious on its face.

But I am trained as an academic and, by nature, skeptical. So while it might sound like a damp squib, one of the only useful approaches that does not threaten the democratic value of social media and email is to educate users to critically assess the information they are sent through email and by their friends and followers on social media. Choose your friends wisely, and that means not on the basis of agreement, but on the basis of trust. And do not have blind faith in anything you read in a newspaper or online. Soon we will be just as amused by people saying they found a fake news story online as we have been by cartoons of someone finding a misspelled word on the Web.

Notes

[1] List of Fake News Sites: http://nymag.com/selectall/2016/11/fake-facebook-news-sites-to-avoid.html

[2] Resnick, P., and Miller, J. (1996), ‘PICS: Internet Access Controls without Censorship’, Communications of the ACM, 39(10): 87-93.

[3] Walters, R. (2016), ‘Mark Zuckerberg responds to criticism over fake news on Facebook’, Financial Times: https://www.ft.com/content/80aacd2e-ae79-11e6-a37c-f4a01f1b0fa1?sectionid=home

[4] Phil Howard: https://www.oii.ox.ac.uk/is-social-media-killing-democracy/

[5] B.S. Detector: http://lifehacker.com/b-s-detector-lets-you-know-when-youre-reading-a-fake-n-1789084038

 

10th Anniversary of OII’s DPhil in Information, Communication & the Social Sciences

It was a real honour today to speak with some of the alumni (a new word for Oxford) of the Oxford Internet Institute’s DPhil programme. A number came together to celebrate the 10th anniversary of the DPhil. It began four seemingly long years after I became the OII’s founding director in 2002. So while I have retired from Oxford, it was wonderful to return virtually to congratulate these graduates on their degrees.

The programme, like the OII itself, was hatched through four years of discussions around how the Institute (which is a department at Oxford University) should move into teaching. Immediately after my arrival we began organizing the OII’s Summer Doctoral Programme (SDP), which was an instant success and continues to draw top doctoral students from across the world who want to hone their thesis through an intensive summer programme with other doctoral students focused on Internet studies. The positive experience we had with this programme led us to move quickly to set up the DPhil – and four years is relatively quick in Oxford time.

As I told our alumni, the quality of our doctoral students has been largely responsible for the esteem the OII was able to gain across the university and colleges of Oxford. That and the international visibility of the OII enabled the department to later develop our Masters programme, and to continue to attract excellent faculty and students from around the world.

I am certain the OII DPhil programme has progressed and will continue to progress since I left Oxford in 2014, such as by adding such strong faculty as Phil Howard and Gina Neff. However, I believe its early success was supported by four key principles that were part of our founding mission:

First, it was anchored in the social sciences. The OII is a department within the Division of Social Sciences at Oxford, which includes the Law Faculty. In 2002, and even since, this made us relatively unusual, given that so many universities, particularly in the USA, viewed study of the Internet as an aspect of computer science and engineering. It is increasingly clear that Internet issues are multidisciplinary, and need a strong social science component that the social sciences should be well equipped to contribute. Many social science faculty are moving into Internet studies, which has become a burgeoning field, but the OII planted Internet studies squarely in the social sciences.

Secondly, our DPhil emphasized methods from the beginning. We needed to focus on methods to be respected across the social sciences in Oxford. But we also knew that the OII could actually move the social sciences forward in such areas as online research, later digital social science, and big data analytics as applied to the study of society. The OII did indeed help move the methods of the social sciences at Oxford into the digital age, such as through its work on e-Science and digital social research.

Thirdly, while it is somewhat of a cliché that research and teaching can complement each other, this was always the vision for the OII DPhil programme. And it happened in ways more valuable than we anticipated.

Finally, because Oxford was a green field in the areas of media, information and communication studies, with no legacy departments vying to own Internet studies, we could innovate around Internet studies from a multi-disciplinary perspective. And we found that many of the best students applying to the OII were multidisciplinary in their training even before they arrived, and understood the value of multidisciplinary, problem-focused research and teaching.

As you can see, I found the discussion today to be very stimulating. My 12 years at Oxford remains one of the highlights of my career, but it is so much enhanced by seeing our alumni continue to be engaged with the institute. So many thanks to Dame Stephanie Shirley for endowing the OII, and the many scholars across Oxford University and its Colleges, such as Andrew Graham and Colin Lucas, for their confidence and vision in establishing the OII and making the DPhil programme possible.

Remember, the OII was founded in 2001, shortly after the dotcom bubble burst and at a university that is inherently skeptical of new fields. Today the Internet faces a new wave of criticisms ranging from online bullying to global cyber security, including heightened threats to freedom of expression and privacy online. With politicians worldwide ratcheting up attacks on whistleblowers and social media, claiming undue political influence, threats to the Internet are escalating. This new wave of panic around the Internet and social media will make the OII and other departments focused on Internet studies even more critical in the coming years.

 

 

OII Farewell

Thanks to all of my colleagues for such wonderful and creative farewell celebrations at the OII. The presentations by Vicki, Helen, Jay Blumler (with a song) and Dame Stephanie were unforgettable. Our staff dinner was exemplary of the team spirit and collaborative culture the Institute has developed and will never lose. It was great fun, but also it said so much about the Institute and how we are consistently grateful and appreciative of one another. The strapline of your card, ‘Things won’t be the same without you’, goes both ways, my friends. Thanks also for the mementos of my tenure – the amazing 19th-century Big Ben; the OII mug, polo shirt and whisky glasses etched with our own flying super-hero (courtesy of Steve Russell); the Oxford tie, tea towel, and calendar; your words both in the Beach Boys’ song, so uniquely performed by the staff, and in the personal notes from students and staff, present and former, and more – a literal treasure chest.

I will never forget my tenure as the founding director of the greatest multi-disciplinary department of Internet Studies at a major university, which owes everything to the supportive and talented team we put together. My thanks again and best wishes to all of the many individuals who have contributed to our success over the last 13 years. You’ve established the traditions that will continue to keep the Institute at the forefront of research on the Internet and its societal implications.

OII Polo Shirt and Treasure Chest

Inspiring a Startup Mentality in Legacy IT Organizations – FCC CIO at the OII on 19 June, 4-5pm

Modernizing and Inspiring a “Startup Mentality” in Legacy Information Technology Organizations

Speakers: David A. Bray, Oxford Martin Associate and CIO of the U.S. FCC, Yorick Wilks, and Greg Taylor

19 June 2014 from 4-5 pm

OII Seminar Room, 1 St Giles’, Oxford

By some estimates, 70% of IT organization budgets are spent on maintaining legacy systems. These costs delay needed transitions to newer technologies. Moreover, this estimate only captures those legacy processes automated by IT; several paper-based, manual processes remain, resulting in additional hidden, human-intensive costs that could benefit from modern IT automation.

This interactive session will explore the opportunities and challenges of inspiring a “startup mentality” in legacy information technology organizations. Dr. David Bray will discuss his own experiences inspiring a “startup mentality” in legacy IT organizations, as well as future directions for legacy organizations confronted with modernization requirements. The discussion will be chaired by the OII’s Dr. Greg Taylor, and Yorick Wilks, an OII Research Associate and Professor of Artificial Intelligence in the Department of Computer Science at the University of Sheffield, will offer his comments and responses to David’s ideas before opening the discussion to the audience.

David A. Bray at OII

Information about the speakers:

David A. Bray: http://www.oxfordmartin.ox.ac.uk/cybersecurity/people/575

Yorick Wilks: http://www.oii.ox.ac.uk/people/?id=31

Greg Taylor: http://www.oii.ox.ac.uk/people/?id=166

Web Science 2014: CALL FOR PARTICIPATION

The 6th ACM Web Science Conference will be held 23-26 June 2014 on the beautiful campus of Indiana University, Bloomington. Web Science continues to focus on the study of information networks, social communities, organizations, applications, and policies that shape and are shaped by the Web.

The WebSci14 program includes 29 paper presentations, 35 posters with lightning talks, a documentary, and keynotes by Dame Wendy Hall (U. of Southampton), JP Rangaswami (Salesforce.com), Laura DeNardis (American University) and Daniel Tunkelang (LinkedIn). Several workshops will be held in conjunction with the conference on topics such as Altmetrics, computational approaches to social modeling, the complex dynamics of the Web, the Web of scientific knowledge, interdisciplinary coups to calamities, Web Science education, Web observatories, and cybercrime and cyberwar. Conference attendees will have an opportunity to enjoy the exhibit Places & Spaces: Mapping Science, meant to inspire cross-disciplinary discussion on how to track and communicate human activity and scientific progress on a global scale. Finally, we will award prizes for the most innovative visualizations of Web data. For this data challenge, we are providing four large datasets that will remain publicly available to Web scientists.

For more information on the program, registration, and a full schedule, please visit http://WebSci14.org and follow us on Twitter (@WebSciConf) or like us on Facebook (https://www.facebook.com/WebSci14).

The Internet Trust Bubble Amid Rising Concern over Personal Data: WEF Report

The World Economic Forum has released a set of complementary reports, including one written by an OII team, entitled ‘The Internet Trust Bubble: Global Values, Beliefs and Practices’, by William H. Dutton, Ginette Law, Gillian Bolsover, and Soumitra Dutta. Our report is a follow-up to our earlier WEF study entitled ‘The New Internet World’. Both are based on global Web-based surveys of Internet users, conducted by the OII in collaboration with the WEF and comScore, with support from ictQATAR.

Our survey research was conducted in 2012, prior to the Snowden revelations, so the potential risks to trust in the Internet that we identified can only have grown since. That said, there is no certainty that the concerns raised over Snowden will reach the general public, or that Internet users will not adapt to risks to personal data and surveillance in order to enjoy the convenience and other benefits of Internet use. There is clearly a need for continuing research on attitudes, beliefs, and practices in the related areas of security, privacy, authenticity and trust in the Internet, and also for greater efforts to support public awareness campaigns, such as a current focus of work in our Global Cyber Security Capacity Centre at the Oxford Martin School.

We found strong support for the values and attitudes underpinning freedom of expression on the Internet. In 2012, users in the nations that had more recently moved online, those who compose the New Internet World, were in some respects more supportive of freedom of expression online than users in the nations of the Old Internet World, who were early to adopt the Internet; they were more likely to support norms underpinning free expression online, and also reported higher levels of perceived freedom in expressing themselves on the Internet.

However, there is concern worldwide over the privacy of personal information, but this is not evenly distributed. Users in nations that have more recently embraced the Internet appeared somewhat less aware of the risks and more trusting in their use of the Internet. Moreover, many users around the world indicate that they are not taking measures designed to protect their privacy and security online. In addition, there is evidence of large proportions of the online world lacking trust in the authenticity and appropriateness of information on the Internet, often looking towards the government to address problems in ways that could put values of the Internet at risk, such as freedom of expression. At the same time, there is a surprisingly high proportion of users that take governmental monitoring and surveillance of the Internet for granted, even before the disclosures of Edward Snowden and his claims about US and other governmental surveillance initiatives. These are illustrations of a pattern of attitudes and beliefs that might well signal a looming crisis of trust in the freedom, privacy, security and value of the Internet as a global information and communication resource.

Building on the theme of trust, A. T. Kearney prepared a related WEF report, entitled ‘Rethinking Personal Data: A New Lens for Strengthening Trust’. In many respects, it moves the discussion forward by identifying steps that could be taken to address growing concerns over trust in the Internet.

The third report was prepared by a team of researchers at Microsoft, who also build on issues of personal data and trust. All are part of the World Economic Forum’s multi-year ‘Rethinking Personal Data’ initiative.

Links to all three reports are below:

http://www3.weforum.org/docs/WEF_InternetTrustBubble_Report2_2014.pdf   

http://www3.weforum.org/docs/WEF_RethinkingPersonalData_TrustandContext_Report_2014.pdf

http://www3.weforum.org/docs/WEF_RethinkingPersonalData_ANewLens_Report_2014.pdf

Coincidentally, I gave a keynote on the ‘Internet trust bubble’ at the Huawei Strategy and Technology Workshop in Shenzhen, China, today, 13 May 2014, the day this report was released. I am doubtful that our data convinced many in the audience that there was reason for concern, as most discussion was rather optimistic about the future of mobile and the Internet, but I do believe there is growing international recognition of these concerns.

Politics and the Internet

Dutton, William H. with the assistance of Elizabeth Dubois (2014) (ed.) Politics and the Internet. London and New York: Routledge. See: http://www.routledge.com/books/details/9780415561501/

Delighted to see the first pre-publication copy of the four-volume set on Politics and the Internet, edited by me with the assistance of Elizabeth Dubois. It sits within the larger Critical Concepts in Political Science series published by Routledge. Designed as a reference for libraries and scholars in this area, its eighty-four chapters reprint work that is foundational to the study of politics and the Internet, comprising four volumes:

I. Politics in Digital Age – Reshaping Access to Information and People

II. Campaigns and Elections

III. Netizens, Networks and Political Movements

IV. Networked Institutions and Governance

Politics and the Internet

A common complaint of the Internet age is that we have little time to look back, and therefore risk giving inordinate attention to the most recent work. It is certainly the case that the study of politics and the Internet is developing at such a pace that it will be far more difficult to reflect the full range of research over the coming decades. However, this collection is designed to be of value well into the future by capturing key work in this burgeoning and increasingly important field and making it accessible to a growing international body of scholars who can build on its foundations. I hope you suggest this reference for your library.

Nominate an Inspiring Digital Social Innovation: Deadline 16 August 2013

 I am trying to help colleagues identify some of the most inspiring social innovations supported by the Internet and related digital technologies. Are there critical social challenges that are being addressed through digital innovations? Help us identify them.

The innovations selected will become part of an ongoing public database of digital social innovations that might inspire related projects, while recognizing the innovators. There is a good overview of the idea in Wired. To submit a nomination, just send Nominet Trust 100 a URL (nothing else is needed) in an email or a tweet with the hashtag #NT100.

The selection process is being supported and organized by the Nominet Trust, established in 2008 by Nominet, the UK’s domain name registry, to ‘invest in people committed to using the internet to address big social challenges.’ To accomplish this, the Trust set up a steering committee, headed by Charles Leadbeater, to help create a list of the 100 ‘most inspiring applications of digital technology for social good …’.

I am delighted to be part of that committee and would appreciate your thoughts on any application that you have found to be creatively addressing a social challenge. You can read more about the process, called Nominet Trust 100, but before you move on to other activities, I really hope you can share your own perspective on what you believe to be an inspiring digital social innovation. Don’t hesitate to nominate a project with which you are associated. Nominations will be a very important part of the selection process, but they will be reviewed and discussed by the steering committee. There are only a few more days before the nomination process closes.

More information on the Nominet Trust 100 at http://nt100.org.uk/