Delighted to have participated in the 2023 Global Cybersecurity Forum, 1-2 November, in Riyadh, Saudi Arabia #GCF2023. It was a jam-packed two days of activities. I did interviews and a podcast, but my main contributions were:
First, a talk on cybersecurity capacity building, entitled ‘Closing the Talent Gap: Frameworks for Capacity Building’ and moderated by Alexandra Topalian, a truly helpful moderator and speaker based in Dubai. My talk focused on the Global Cyber Security Capacity Maturity Model, which illustrates how talent in cybersecurity encompasses a broad ecosystem of actors rather than only a small group of cybersecurity technical experts. The days of relying on the IT experts alone have passed with the rise of the Internet and its more than 5 billion users.

Second, a panel entitled ‘Cognitive Vulnerabilities: Why Humans Fall for Cyber Attacks’. This panel was moderated by Lucy Hedges, another talented moderator and tech reporter for the BBC, and included Philippe Valle, an EVP for Digital Identity & Security, Thales; Gareth Maclachlan, a Senior VP & General Manager for Network & Email Business at Trellix; and David Chow, Global Chief Technology Strategy Officer for Trend Micro, who served under US Presidents Bush, Obama and Trump. In this panel, I noted the value of developing a cybersecurity mindset, the concept of cognitive politics, and the ways in which a confirmatory bias might help explain the vulnerability of users to cyberattacks as well as to disinformation. We all need to be alert to our tendency to read, listen to, and view what supports our pre-existing viewpoints, a media habit that influence operations and bad actors can exploit.

Searching for a Balanced Perspective on AI and Cybersecurity
You may have noticed that the GCF in Riyadh occurred while the AI Summit was underway in the UK this past week, at Bletchley Park.[1] While I was far from the UK, I found some parallels between the events.
Most obviously, a major theme of the GCF was artificial intelligence (AI) – the new-new thing in Internet Studies, cybersecurity, and public regulatory discussions. Since I work in the UK, many asked me what I thought would come of the UK’s summit. I could only speculate that more global communication about AI would be a good thing, and that it might lead the field to clarify the reasons underpinning concerns over the future of AI and society and to begin developing some principles to guide AI developments. The UK summit certainly received worldwide attention.
Maybe less obvious was an effort I sensed in Riyadh, and in discussions of Bletchley Park, to find a more balanced perspective between the hype that often surrounds new technologies and the fears that seem to dominate current discussions in the context of a ‘harms framework’ – a framework that can marginalise the benefits of the Internet, social media, and AI. Can we come to a more balanced perspective?
Across the two days, I inevitably drew from my book, The Fifth Estate: The Power Shift of the Digital Age (OUP 2023). It counters conventional accounts of the risks tied to social media and the Internet, showing ways in which new media can empower networked individuals to complement traditional news sources and create new sources of accountability. Clearly there is a need for more empirical research on actual developments and their societal implications; even good descriptive studies could help anchor these discussions about the future in what is actually happening.
Note