Forbes Global CEO Conference: Artificial Intelligence Evolution Brings Individual Empowerment, Tech Experts Say

Artificial intelligence experts speaking at the Forbes Global CEO Conference in Singapore on Tuesday expressed optimism about the future of AI, despite worries that the fast-growing technology could bring dramatic changes to business and society.

“I believe the current evolution of generative AI is a massive acceleration of a very long-term pattern of leveraging technology as a toolset,” Eduardo Saverin, cofounder and co-CEO of Singapore-based venture capital firm B Capital, said during a panel discussion. “Where this potentially starts arriving into a phase change is this idea that through time, computers can effectively program themselves…we’re very early in that evolution, or that phase change, and it’s incredibly exciting.”

The Facebook (now Meta) cofounder, who topped Singapore’s 50 Richest list this year with a net worth of $16 billion, added, “What’s empowering about [AI technology] is that it’s driving in some ways a realism to the idea that the world can be personalized down to the level of one.”

This includes tailored content, such as a hyper-personalized social media newsfeed—one of Meta’s “key evolutions” during the company’s early days, Saverin noted—that lets users scroll through content relevant to their interests.

The other panelists were Meng Ru Kuok, group CEO and founder of Caldecott Music Group; Antoine Blondeau, cofounder and managing partner of Alpha Intelligence Capital; and Rohan Narayana Murty, founder and CTO of Soroco.

In creative fields like the music industry, AI-driven developments are providing people the opportunity “to do things that they couldn’t do before, and do them at scale, and potentially autonomously,” said Kuok, who is also founder of music production app BandLab.

“Music has actually been using algorithms and AI and innovations and technology for a long time, whether it’s a transition from the recording studio, all the way to personal computing,” said Kuok. “Even as an operator, the speed and the unexpected nature of the technology shift has changed even all the old perspectives on the opportunity at hand.”

Still, developments are threatened by bad actors who may use AI tools to “create recursive, autonomous things” that introduce risks, added Kuok, citing concerns such as fraud related to music streaming. “I’m less worried about the computer, I’m worried more about the human,” he said. “That’s something for us to really think about, from safeguards…historically, it’s been humans who have been the problem as well as the solution.”

“Everything that is consumer-facing is going to be incredibly enhanced,” noted Blondeau, who worked on the project that became Apple’s voice assistant, Siri. These consumer-facing fields include healthcare and education, which he predicts AI will augment over the next few years. He raised the possibility of AI-powered drug discovery that could potentially identify variants of diseases before they emerge, or cures to debilitating conditions like cancer. “I always say that AI will save us before it kills us,” he said.

“AI will make us live longer, it will make us hyper-productive…this is the hope, and it’s a massive hope,” Blondeau added. “The fear is that we’ll end up in a video game, right? We’ll have nothing much to do, and the machines will have to do the hard work.”

To Murty, some of the concerns surrounding AI may involve its integration of systems that emulate the way humans think. “I don’t think [AI] is cognition, and I think there’s a lot of confusion around this,” he said. AI operates “as a black box” to simulate certain parts of human cognition, he said, but not its entirety. “When we start thinking about cognition, that’s the last refuge, or bastion, of human difference in this world, it gets quite scary,” Murty added.

Yet “AI is the perfect tool” for identifying areas of improvement within companies, leveraging data instead of questionnaires, Murty said. “For the first time, we have an opportunity to affect every single organization, in terms of how they get work done, in terms of how they think,” he added. “The very question of how office work ought to be done differently or better is in some sense best answered by a machine, not a person.”

AI’s potential to outperform humans reflects how any rapid innovation brings a “potential for human displacement,” said Saverin, but AI can create a “win-win scenario” for both small and large businesses. “We are ultimately humans, and we’re going to want to experience the world and digest the world in a human way,” he said.

“These [AI] technologies will make corporations efficient, profit centers more efficient…and there will be an infinite path of potential learning and enablement of what you can do as an individual, but how you earn money, and how you become an active participant in income generation in the world will evolve,” Saverin said. “We need to be very careful to enable that evolution to go in the right direction.”


Explained | The Hiroshima process that takes AI governance global

The annual Group of Seven (G7) Summit, hosted by Japan, took place in Hiroshima on May 19-21, 2023. Among other matters, the G7 Hiroshima Leaders’ Communiqué initiated the Hiroshima AI Process (HAP) – an effort by this bloc to determine a way forward to regulate artificial intelligence (AI).

The ministerial declaration of the G7 Digital and Tech Ministers’ Meeting, on April 30, 2023, discussed “responsible AI” and global AI governance, and said, “we reaffirm our commitment to promote human-centric and trustworthy AI based on the OECD AI Principles and to foster collaboration to maximise the benefits for all brought by AI technologies”.

Even as the G7 countries use such fora to deliberate AI regulation, they are acting on their own instead of waiting for the outcomes of the HAP. So while there is accord on regulating AI, the discord – evident in countries preferring to go their own way – will also continue.

What is the Hiroshima AI process?

The communiqué accorded more importance to AI than the technology has ever received in such a forum – even as G7 leaders were engaged with other issues like the war in Ukraine, economic security, supply chain disruptions, and nuclear disarmament. It said that the G7 is determined to work with others to “advance international discussions on inclusive AI governance and interoperability to achieve our common vision and goal of trustworthy AI, in line with our shared democratic value”.

To quote further at length:

“We recognise the need to immediately take stock of the opportunities and challenges of generative AI, which is increasingly prominent across countries and sectors, and encourage international organisations such as the OECD to consider analysis on the impact of policy developments and Global Partnership on AI (GPAI) to conduct practical projects. In this respect, we task relevant ministers to establish the Hiroshima AI process, through a G7 working group, in an inclusive manner and in cooperation with the OECD and GPAI, for discussions on generative AI by the end of this year.

These discussions could include topics such as governance, safeguard of intellectual property rights including copyrights, promotion of transparency, response to foreign information manipulation, including disinformation, and responsible utilisation of these technologies.”

The HAP is likely to conclude by December 2023. The first meeting under this process was held on May 30. Per the communiqué, the process will be organised through a G7 working group, although the exact details are not clear.

Why is the process notable?

While the communiqué doesn’t indicate the expected outcomes of the HAP, there is enough in it to suggest which values and norms will guide the process and from where it will derive the principles on which AI governance will be based.

The communiqué as well as the ministerial declaration also say more than once that AI development and implementation must be aligned with values such as freedom, democracy, and human rights. Values need to be linked to principles that drive regulation. To this end, the communiqué also stresses fairness, accountability, transparency, and safety.

The communiqué also spoke of “the importance of procedures that advance transparency, openness, and fair processes” for developing responsible AI. “Openness” and “fair processes” can be interpreted in different ways, and the exact meaning of the “procedures that advance them” is not clear.

What does the process entail?

An emphasis on freedom, democracy, and human rights, and mentions of “multi-stakeholder international organisations” and “multi-stakeholder processes”, indicate that the HAP isn’t expected to address AI regulation from a State-centric perspective. Instead, it is meant to account for the importance of involving multiple stakeholders and to ensure that the processes involved are fair and transparent.

The task before the HAP is really challenging considering the divergence among G7 countries in, among other things, regulating risks arising out of applying AI. It can help these countries develop a common understanding on some key regulatory issues while ensuring that any disagreement doesn’t result in complete discord.

For now, there are three ways in which the HAP can play out:

1. It enables the G7 countries to move towards divergent regulations based on shared norms, principles and guiding values;

2. It becomes overwhelmed by divergent views among the G7 countries and fails to deliver any meaningful solution; or

3. It delivers a mixed outcome with some convergence on finding solutions to some issues but is unable to find common ground on many others.

Is there an example of how the process can help?

The matter of intellectual property rights (IPR) offers an example of how the HAP can help. Here, the question is whether training a generative AI model, like ChatGPT, on copyrighted material constitutes a copyright violation. While IPR in the context of AI finds mention in the communiqué, the relationship between AI and IPR in different jurisdictions is not clear, and there have been several conflicting interpretations and judicial pronouncements.

The HAP can help the G7 countries move towards a consensus on this issue by specifying guiding rules and principles related to AI and IPR. For example, the process can bring greater clarity to the role and scope of the ‘fair use’ doctrine in the use of AI for various purposes.

Generally, the ‘fair use’ exception is invoked to allow activities like teaching, research, and criticism to continue without seeking the copyright-owner’s permission to use their material. Whether use of copyrighted materials in datasets for machine learning is fair use is a controversial issue.

As an example, the HAP can develop a common guideline for G7 countries that permits the use of copyrighted materials in datasets for machine-learning as ‘fair use’, subject to some conditions. It can also differentiate use for machine-learning per se from other AI-related uses of copyrighted materials.

This in turn could affect the global discourse and practice on this issue.

The stage has been set…

The G7 communiqué states that “the common vision and goal of trustworthy AI may vary across G7 members.” The ministerial declaration has a similar view: “We stress the importance of international discussions on AI governance and interoperability between AI governance frameworks, while we recognise that like-minded approaches and policy instruments to achieve the common vision and goal of trustworthy AI may vary across G7 members.” This acknowledgment, taken together with other aspects of the HAP, indicates that the G7 doesn’t expect its members to harmonise their regulatory policies.

On the other hand, the emphasis on working with others, including OECD countries, and on developing an interoperable AI governance framework suggests that while the HAP is a process established by the G7, it still has to respond to the concerns of other country groups as well as the people and bodies involved in developing international technical standards for AI.

It’s also possible that countries that aren’t part of the G7 but want to influence the global governance of AI may launch a process of their own like the HAP.

Overall, the establishment of the HAP makes one thing clear: AI governance has become a truly global issue that is likely to only become more contested in future.

Krishna Ravi Srinivas is with RIS, New Delhi. Views expressed are personal.


Explained | Could a photography dispute in the U.S. affect ChatGPT and its cousins?

Copyright law protects the work of diverse artists, including photographers, as well as provides a set of exclusive rights for artists over their creative output. This includes controlling the manner in which others reproduce or modify their work. However, these exclusive rights are balanced with the rights of the users of such work, including other artists who might want to build on or comment on them, with the help of diverse exceptions under copyright law.

What is exempt from infringement liability?

Different jurisdictions follow different approaches to exceptions. Some, particularly countries in continental Europe, adopt the ‘enumerated exceptions approach’: the use in question needs to be specifically covered under the statute to be considered as an exception to infringement. Some others, including the U.S., follow an open-ended approach that doesn’t specify exemptions beforehand; instead, they have guidelines about the types of uses that can be exempted.

The U.S. courts primarily consider four factors when determining whether a particular use can be considered to be an instance of fair use: (1) purpose and character of the use; (2) nature of the copyrighted work; (3) amount and substantiality of the portion taken by the defendant, and (4) effect of the use on the potential market of the plaintiff’s work.

Of these, U.S. courts have given the greatest weight to the first factor. In particular, whether a use can be considered “transformative” has often played the most critical role in determining the final outcome of a fair-use case.

This open-ended approach to exceptions provides U.S. copyright law considerable flexibility and strength to deal with challenges posed by emerging technologies on the copyright system. However, it has a major limitation: there is no way to know whether an activity will be exempted from liabilities until after litigation. That is, it is very hard to predict ex ante whether an activity will be exempted from copyright infringement liabilities.

The recent decision of the U.S. Supreme Court in Andy Warhol Foundation for the Visual Arts Inc. v. Goldsmith et al. has just added more unpredictability to this process – with implications for how we regulate a powerful form of artificial intelligence.

What is the Andy Warhol Foundation case?

Andy Warhol with his pet dachshund, 1973. | Photo Credit: Jack Mitchell, CC BY-SA 4.0

Known for her concert and portrait shots, Lynn Goldsmith photographed the famous musician Prince in 1981. One of those photos was licensed in 1984 to Vanity Fair magazine for use as an “artist reference”. The licence specified that the illustration could appear once as a full-page element and once as a one-quarter-page element in the magazine’s November 1984 issue. Vanity Fair paid Ms. Goldsmith $400 for the licence.

Vanity Fair then hired the celebrated visual artist Andy Warhol to create the illustration. Mr. Warhol made a silkscreen portrait of Prince using Ms. Goldsmith’s photo. It appeared in the magazine with appropriate credit to Ms. Goldsmith. But while the licence had authorised only one illustration, Mr. Warhol additionally created 13 screen prints and two pencil sketches.

In 2016, Condé Nast, the media conglomerate that publishes Vanity Fair, approached the Andy Warhol Foundation (AWF) to reuse the 1984 illustration as part of a story on Prince. But when they realised that there were more portraits available, they opted to publish one of them instead (an orange silkscreen portrait). And as part of the licence to use it, they paid $10,000 to AWF, and nothing to Ms. Goldsmith.

When AWF realised that Ms. Goldsmith may file a copyright infringement suit, it filed a suit for declaratory judgment of non-infringement. Ms. Goldsmith then counter-sued AWF for copyright infringement.

What did the courts find?

The front façade of the Supreme Court of the United States in Washington, DC, October 19, 2020. | Photo Credit: Ian Hutchinson/Unsplash

First, a district court summarily ruled in favour of AWF, opining that Mr. Warhol’s use of Ms. Goldsmith’s photo constituted fair use. The court banked on the first factor and held that Mr. Warhol’s works were “transformative” as they “have a different character, give Goldsmith’s photograph a new expression, and employ new aesthetics with creative and communicative results distinct from Goldsmith’s”.

It also observed that Mr. Warhol’s work added something new to the world of art “and the public would be deprived of this contribution if the works could not be distributed”.

However, the Court of Appeals for the Second Circuit reversed these findings and disagreed that Mr. Warhol’s use of the photograph constituted fair use. The case subsequently went to the U.S. Supreme Court, which delivered its verdict on May 18, 2023.

The majority of judges concluded that if an original work and secondary work have more or less similar purposes and if the secondary use is of a commercial nature, the first factor may not favour a fair-use interpretation – unless there are other justifications for copying.

In this particular instance, according to the majority decision, both Ms. Goldsmith’s photos and Mr. Warhol’s adaptations had more or less the same purpose: to portray Prince. The majority said that while copying may have helped convey a new meaning or message, that in itself did not suffice under the first factor.

The dissenting opinion focused extensively on how art is produced, particularly the fact that no artist creates anything in a vacuum. Justice Elena Kagan, author of this opinion, wrote of the need for a broader reading of ‘transformative use’ for the progress of arts and science. The dissenters also opined that Mr. Warhol’s addition of important “new expression, meaning and message” tilted the first factor in favour of a finding of fair use.

How does this affect generative AI?

A view of the ChatGPT website. | Photo Credit: Rolf van Root/Unsplash

While this dispute arose in the context of the use of a photograph as an artistic reference, the implications of the court’s finding are bound to ripple across the visual arts at large. The majority position could challenge the manner in which many generative artificial intelligence (AI) tools, such as ChatGPT, Midjourney, and Stable Diffusion, have been conceived. These models’ makers ‘train’ them on text, photos, and videos strewn across the internet, copyrighted or not.

For example, if someone uses a generative AI tool to create pictures in the style of Mr. Warhol, and if the resulting images are similar to any of Mr. Warhol’s works, a court is now likelier to rule against this being described as fair use, taking the view that both the copyrighted work and the models’ output serve similar purposes.

The majority’s reliance on the commercial nature of the use may also result in substantial deviation from the established view: that the commercial nature of the use in itself cannot negate a finding of fair use. But the true extent of the implications of the verdict will be clear only when trial courts begin applying the ratio in this judgment to future cases.

What about Indian copyright law?

There may not be any direct implications for Indian copyright law, as the framework of exceptions here is different. India follows a hybrid model of exceptions, in which fair dealing with copyrighted work is exempted for certain specific purposes under Section 52(1)(a) of the Copyright Act, 1957. India also has a long list of enumerated exceptions.

This said, the observations in the U.S. Supreme Court’s decision could have a persuasive effect, particularly when determining ‘fairness’ in fair-dealing litigation. Then again, only time will tell which opinion will prove more persuasive – the majority’s or the minority’s.

Arul George Scaria is an associate professor at the National Law School of India University (NLSIU).
