Emerj at the Fourth OECD Global Forum on Digital Security for Prosperity

Matthew DeMello

Matthew is Senior Editor at Emerj, focused on enterprise AI use cases and trends. He previously served as a podcast producer with CrossBorder Solutions, a venture-backed, AI-enabled tax solutions firm. Before that, Matthew spent three years at the World Policy Institute as a news editor and podcast producer.


Event Title: 4th OECD Global Forum on Digital Security for Prosperity

Event Host: OECD

Location: Paris, France

Date: March 13-14, 2023

Team Member: Daniel Faggella, Founder, CEO, and Head of Research, Emerj Technology Research

What Happened

Daniel’s first collaboration with the OECD began with the keynote address he delivered at the organization’s symposium on life sciences AI in 2019. The following year, just before the onset of the pandemic, Daniel joined the launch of the organization’s AI Policy Observatory and interviewed AI governance leaders from the US and Europe for the ‘AI in Business’ podcast.

Last month, Daniel attended the OECD’s Fourth Global Forum on Digital Security for Prosperity, held March 13th and 14th in Paris, France, on behalf of Emerj. The intergovernmental organization invited him to join a panel discussion on security risks in AI on the first day of the conference.

Emerj CEO and Head of Research Daniel Faggella on the panel for security risks in AI at the 4th OECD Global Forum on Digital Security for Prosperity in March.

The panel was scheduled for the first day and framed within the second of the conference’s three topical themes:

  • As policymakers move to implement security policies, how can we ease the implementation of better practices across the Internet of Things (IoT)?
  • How can AI policies be embedded with ‘secure by design’ approaches?
  • How can we better promote collaboration between technical and policy-making communities to address future security challenges within new and emerging technologies? 

Day one of the event covered the first two themes, while the second day focused only on the third and final theme. 

Other distinguished guests serving on the AI security panel with Daniel included Poppy Gustafsson, CEO of Darktrace; Victoria Krakovna, a research scientist at DeepMind; Yutaka Miyake, Director of the Information System and Security Department, General Affairs Division at KDDI Research; Clara Neppel, Senior Director of the IEEE; and Taylor Reynolds, Technology Policy Director of the MIT Internet Policy Research Initiative (IPRI).

The session shared insights from both the public and private sectors, highlighting challenges and risks in black-box problems, bias and discrimination, vulnerability to attacks, lack of human oversight, and privacy. Daniel’s questions to the panel probed the challenges of developing security policies that better embed security into AI capabilities and models.

The panelists also discussed ways the private sector, academia, and standard-setting associations could work together to achieve better outcomes.

Finally, the group concluded by listing the lessons learned from other technology policies in the public and private sectors that best transfer to the current era of IoT and AI-enhanced devices. Daniel then moderated a Q&A session with the audience, surfacing perspectives and different ideas on automated, trustworthy, explainable AI security.

What We Learned

The Speed of Change and the Need for Dialogue on GenAI

ChatGPT quickly became the unscheduled elephant in the room on the panel’s agenda, reflecting a sense of general concern felt not only at the conference but across the world. Panelists expressed concern about the lack of accuracy in the content generated by tools like ChatGPT.

It quickly became clear that these tools are so nascent, even amid widespread global use and experimentation, that they will only grow more unpredictable with continued use, representing an ever-increasing risk to international security, however negligible that risk may be at the moment.

Once the panel articulated the dilemma, a few noteworthy camps of opinion emerged among the experts. A conflict arose between advocates for a near-term set of rules and restrictions on generative AI and large language models and those who felt policy should wait to see where these technologies ultimately affect workflows before enacting new norms.

One point of agreement for all panel guests was that the pace of technological change is notably faster than it was even a few years ago. Breakthroughs are coming so rapidly that the panelists agreed a consistent, ongoing dialogue between academia, government, intergovernmental organizations, and the private sector is needed to assess, in real time, where these technologies will negatively impact society.

Experimenting with Cyber Security Ratings for Consumer Products 

A recurring theme permeating many discussions throughout the conference was safety-related security labels and ratings for consumer products in the emerging IoT. From baby monitors to doorbells to cell phones and beyond: as more goods connect to the internet, how easy is it to hack them?

Standards are now emerging to rate these systems based on what kind of information can be procured or damaged by criminal actors. The security risks range from privacy to property. And because IoT-automated home security systems operate directly in the physical world, these security concerns also carry real-life physical dangers.

Among the challenges to labels and ratings expressed by conference panelists: training consumers to understand what these different kinds of ratings and labels mean is complicated, with no clear solutions for overcoming the ensuing messaging problems.

Soon-Chia Lim, Head of the Cyber Security Certification Centre at the Cyber Security Agency of Singapore, offered his account of how officials there have approached the problem in a panel on IoT security legislation. While the initiative is in its early days, Lim had much to share about the nuances of the results they are seeing so far.

The panel for policy responses to security challenges in AI at the 4th OECD Global Forum on Digital Security for Prosperity.

Another labeling challenge expressed on the panel was the exponentially increasing scale of risk, compounded by new and emerging kinds of risk.

For example, the risk to personal information (credit card transactions, medical history, the contents of one’s personal property) differs significantly from the risk to one’s self or physical property. Attempts to classify these risks by importance ran up against the need to anticipate future forms of risk, fostering a wariness of ‘unknown unknowns’ among those at the forefront of solving security challenges.

Panelists agreed, however, that the question of measurement and classification must be solved before authorities can hope to properly inform the public of the nature and degree of risk involved in purchasing these products. Integral to public communications among businesses, governments, and standards organizations is ensuring that consumers always know what’s happening to their data.
