Explained | The Hiroshima process that takes AI governance global

The annual Group of Seven (G7) Summit, hosted by Japan, took place in Hiroshima on May 19-21, 2023. Among other matters, the G7 Hiroshima Leaders’ Communiqué initiated the Hiroshima AI Process (HAP) – an effort by this bloc to determine a way forward to regulate artificial intelligence (AI).

The ministerial declaration of the G7 Digital and Tech Ministers’ Meeting, on April 30, 2023, discussed “responsible AI” and global AI governance, and said, “we reaffirm our commitment to promote human-centric and trustworthy AI based on the OECD AI Principles and to foster collaboration to maximise the benefits for all brought by AI technologies”.

Even as the G7 countries use such fora to deliberate on AI regulation, they are acting on their own instead of waiting for the outcomes of the HAP. So while there is an accord on the need to regulate AI, the discord – evident in countries preferring to chart their own paths – will also continue.

What is the Hiroshima AI process?

The communiqué accorded more importance to AI than the technology has ever received in such a forum – even as G7 leaders were engaged with other issues like the war in Ukraine, economic security, supply chain disruptions, and nuclear disarmament. It said that the G7 is determined to work with others to “advance international discussions on inclusive AI governance and interoperability to achieve our common vision and goal of trustworthy AI, in line with our shared democratic values”.

To quote further at length:

“We recognise the need to immediately take stock of the opportunities and challenges of generative AI, which is increasingly prominent across countries and sectors, and encourage international organisations such as the OECD to consider analysis on the impact of policy developments and Global Partnership on AI (GPAI) to conduct practical projects. In this respect, we task relevant ministers to establish the Hiroshima AI process, through a G7 working group, in an inclusive manner and in cooperation with the OECD and GPAI, for discussions on generative AI by the end of this year.

“These discussions could include topics such as governance, safeguard of intellectual property rights including copyrights, promotion of transparency, response to foreign information manipulation, including disinformation, and responsible utilisation of these technologies.”

The HAP is likely to conclude by December 2023. The first meeting under this process was held on May 30. Per the communiqué, the process will be organised through a G7 working group, although the exact details are not clear.

Why is the process notable?

While the communiqué doesn’t indicate the expected outcomes of the HAP, there is enough in it to indicate the values and norms that will guide the process and the sources from which it will derive the guiding principles on which to base AI governance.

Both the communiqué and the ministerial declaration say more than once that AI development and implementation must be aligned with values such as freedom, democracy, and human rights. Values need to be linked to principles that drive regulation, and to this end the communiqué also stresses fairness, accountability, transparency, and safety.

The communiqué also speaks of “the importance of procedures that advance transparency, openness, and fair processes” for developing responsible AI. “Openness” and “fair processes” can be interpreted in different ways, and the exact nature of the procedures that would advance them is not clear.

What does the process entail?

An emphasis on freedom, democracy, and human rights, and mentions of “multi-stakeholder international organisations” and “multi-stakeholder processes”, indicate that the HAP isn’t expected to address AI regulation from a State-centric perspective. Instead, it exists to account for the importance of involving multiple stakeholders in various processes and to ensure those processes are fair and transparent.

The task before the HAP is challenging, considering the divergence among G7 countries on, among other things, how to regulate the risks arising from the use of AI. It can help these countries develop a common understanding of some key regulatory issues while ensuring that any disagreement doesn’t result in complete discord.

For now, there are three ways in which the HAP can play out:

1. It enables the G7 countries to move towards divergent regulations that are nonetheless based on shared norms, principles, and guiding values;

2. It becomes overwhelmed by divergent views among the G7 countries and fails to deliver any meaningful solution; or

3. It delivers a mixed outcome with some convergence on finding solutions to some issues but is unable to find common ground on many others.

Is there an example of how the process can help?

The matter of intellectual property rights (IPR) offers an example of how the HAP can help. Here, the question is whether training a generative AI model, like ChatGPT, on copyrighted material constitutes a copyright violation. While IPR in the context of AI finds mention in the communiqué, the relationship between AI and IPR remains unclear and varies across jurisdictions. There have been several conflicting interpretations and judicial pronouncements.

The HAP can help the G7 countries move towards a consensus on this issue by specifying guiding rules and principles related to AI and IPR. For example, the process can bring greater clarity to the role and scope of the ‘fair use’ doctrine in the use of AI for various purposes.

Generally, the ‘fair use’ exception is invoked to allow activities like teaching, research, and criticism to proceed without seeking the copyright owner’s permission to use their material. Whether the use of copyrighted materials in datasets for machine learning constitutes fair use is a controversial issue.

As an example, the HAP can develop a common guideline for G7 countries that permits the use of copyrighted materials in machine-learning datasets as ‘fair use’, subject to certain conditions. It can also differentiate the use of copyrighted materials for machine learning per se from other AI-related uses.

This in turn could affect the global discourse and practice on this issue.

The stage has been set…

The G7 communiqué states that “the common vision and goal of trustworthy AI may vary across G7 members.” The ministerial declaration takes a similar view: “We stress the importance of international discussions on AI governance and interoperability between AI governance frameworks, while we recognise that like-minded approaches and policy instruments to achieve the common vision and goal of trustworthy AI may vary across G7 members.” This acknowledgment, taken together with other aspects of the HAP, indicates that the G7 countries don’t expect to harmonise their regulatory policies.

On the other hand, the emphasis on working with others, including OECD countries, and on developing an interoperable AI governance framework suggests that while the HAP is a process established by the G7, it still has to respond to the concerns of other country-groups as well as of the people and bodies involved in developing international technical standards for AI.

It’s also possible that countries that aren’t part of the G7 but want to influence the global governance of AI may launch a process of their own like the HAP.

Overall, the establishment of the HAP makes one thing clear: AI governance has become a truly global issue that is likely to only become more contested in future.

Krishna Ravi Srinivas is with RIS, New Delhi. Views expressed are personal.
