AI news is driving tech ‘building blocks’ stocks like Nvidia. But another ‘power’ area will also benefit, say these veteran investors

Kneel to your king, Wall Street.

After forecasting record revenue backed by a “killer AI app,” Nvidia has teed up the Nasdaq (COMP) for a powerful Thursday open. Indeed, thanks to that chip maker and a few other generals — Microsoft, Apple, Alphabet, etc. — tech is seemingly unstoppable:

Elsewhere, the Dow (DJIA) is looking rattled by a Fitch warning over debt wranglings ahead of a long weekend.

But our call of the day is accentuating the positive with some valuable insight on tech investing amid AI mania from a pair of seasoned investors.

Inge Heydorn, partner on the GP Bullhound Global Technology Fund, and portfolio manager Jenny Hardy advise choosing companies carefully, given high valuations in some parts of tech that could make earnings vulnerable.

“But looking slightly beyond the volatility, tech has the advantage of being driven by many long-term secular themes which will continue to play out despite a weaker macro,” Hardy told MarketWatch in follow-up comments to an interview with the pair last week. GP Bullhound invests in leading global tech companies, with more than $1 billion in assets under management. 

“We try to make sure we’re exposed to these areas that will be more resilient. AI is the perfect example of that – none of Microsoft, Amazon or Google will risk falling behind in the AI race – they will all keep spending, and that will continue to drive earnings for the semiconductor companies that go into these servers higher,” said Hardy, who has worked in the investment industry since 2011.

“The way that we think about investing around [AI] is in the building blocks, the picks and shovels infrastructure, which for us is really the semiconductor companies that go into the training servers and the inference servers,” she said.

Nvidia (NVDA), Advanced Micro Devices (AMD), Taiwan Semiconductor (TSM; 2330), Infineon (IFX), Cisco (CSCO), NXP (NXPI), Microsoft (MSFT), ServiceNow (NOW) and Palo Alto (PANW) are all in their portfolio. They also like the semiconductor capital equipment industry — AI beneficiaries and tailwinds from increasingly localized supply chains — with companies including KLA (KLAC), Lam Research (LRCX), ASML (ASML) and Applied Materials (AMAT).

As Hardy points out, “lots of big tech has given us lots of certainty as it relates to AI, lots of certainty as it relates to the amount they are going to spend on AI.”

Enter Nvidia’s results, which Hardy said are proof the “AI spend race has begun…Nvidia’s call featured an impressive roster of companies deploying AI with Nvidia – AT&T, Amgen, ServiceNow – the message was that this technology adoption is widespread and really a new normal.” She said they see benefits spreading across the AI value chain — CPU providers, networking infrastructure players, memory and semicap equipment makers.

Heydorn, who has traded technology stocks since 1994 and also runs a hedge fund with Hardy, says there are two big tech trends currently — “AI across the board and power semiconductors driven by EV cars and green energy projects.”

But GP Bullhound steers clear of EV makers like Tesla (TSLA), where they see a lot of competition, notably from China. “Ultimately, they will need semiconductors and the semiconductors crucially are able to keep that pricing power in a way that the vehicle companies are not able to do because of the differences in competition,” she said.

Are the tech duo nervous about anything? “The macro economy is clearly the largest risk and further bank or real-estate problems,” said Heydorn, as Hardy adds that they are watching for second-order impacts on tech.

“One example would be enterprise software businesses with high exposure to financial services, which given those latest problems in that sector, might see a re-prioritization of spend away from new software implementations,” she said.

In the near term, Heydorn says investors should watch out for May sales numbers and any AI mentions from Taiwan via TSMC, mobile chip group MediaTek (2454) and Apple (AAPL) supplier Foxconn (2354) that may help with guidance for the second half of the year. “The main numbers in Taiwan will tell us where we are in inventories. They’re going to tell us if the 3-nanometers, that’s a new processor that’s going into Apple iPhones, are ready for production,” he said.

Read: JPMorgan says this is how much revenue other companies will get from AI this year

The markets

Nasdaq-100 futures (NQ00) are up 1.8%, S&P 500 futures (ES00) are up 0.6%, but those for the Dow (YM00) are slipping on debt-ceiling jitters. The yield on the 10-year Treasury note (TMUBMUSD10Y) is up 4 basis points to 3.75%.

For more market updates plus actionable trade ideas for stocks, options and crypto, subscribe to MarketDiem by Investor’s Business Daily. Follow all the stock market action with MarketWatch’s Live Blog.

The buzz

Fitch put U.S. credit ratings on “rating watch negative,” citing D.C. “brinkmanship” as the debt-ceiling deadline nears. House Speaker Kevin McCarthy told investors not to worry, saying an agreement will be reached.

Best Buy (BBY) stock is up 6% after an earnings beat, while Burlington Stores (BURL) is slipping after a profit and revenue miss. Dollar Tree (DLTR) and Ralph Lauren (RL) are still to come, followed by Ulta (ULTA), Costco (COST) and Autodesk (ADSK) after the close.

Nvidia is up 25% in premarket trading and headed toward a rare $1 trillion valuation after saying late Wednesday that revenue would bust a previous record by 30%.

Opinion: Nvidia CFO says ‘The inflection point of AI is here’

But AI upstart UiPath (PATH) is down 8% after soft second-quarter revenue guidance, software group Snowflake (SNOW) is off 14% on an outlook cut, and cloud-platform group Nutanix (NTNX) is rallying on a better outlook.

Elf Beauty (ELF) is up 12% on upbeat results from the cosmetic group, with Guess (GES) up 5% as losses slimmed and sales rose. American Eagle (AEO) slid on a sales decline forecast. Red Robin Gourmet Burgers (RRGB) is up 5% on the restaurant chain’s upbeat forecast.

Revised first-quarter GDP is due at 8:30 a.m., alongside weekly jobless claims, with pending-home sales at 10 a.m. Richmond Fed President Tom Barkin will speak at 9:50 a.m., followed by Boston Fed President Susan Collins.

A Twitter Spaces discussion between presidential candidate Florida Gov. Ron DeSantis and Elon Musk was plagued by glitches.

The best of the web

Before Tina Turner died at 83, she gave us these 5 retirement lessons

Can WallStreetBets’ spectacular short-squeeze be repeated?

Paralyzed man walks naturally again with brain and spine implants

The tickers

These were the top-searched tickers on MarketWatch as of 6 a.m.:

Ticker | Security name
NVDA | Nvidia
TSLA | Tesla
GME | GameStop
BUD | Anheuser-Busch InBev
AMD | Advanced Micro Devices
PLTR | Palantir Technologies
AAPL | Apple
AMZN | Amazon.com
NIO | Nio
AI | C3.ai

Random reads

“No way.” Abba says it won’t perform at the 50th anniversary of its Eurovision win

The Welsh harbor that looks like a dolphin from high above.

Need to Know starts early and is updated until the opening bell, but sign up here to get it delivered once to your email box. The emailed version will be sent out at about 7:30 a.m. Eastern.

Listen to the Best New Ideas in Money podcast with MarketWatch reporter Charles Passy and economist Stephanie Kelton.

Explained | Could a photography dispute in the U.S. affect ChatGPT and its cousins?

Copyright law protects the work of diverse artists, including photographers, and gives artists a set of exclusive rights over their creative output. This includes controlling the manner in which others reproduce or modify their work. However, these exclusive rights are balanced with the rights of the users of such work, including other artists who might want to build on or comment on them, with the help of diverse exceptions under copyright law.

What is exempt from infringement liability?

Different jurisdictions follow different approaches to exceptions. Some, particularly countries in continental Europe, adopt the ‘enumerated exceptions approach’: the use in question needs to be specifically covered under the statute to be considered as an exception to infringement. Some others, including the U.S., follow an open-ended approach that doesn’t specify exemptions beforehand; instead, they have guidelines about the types of uses that can be exempted.

The U.S. courts primarily consider four factors when determining whether a particular use can be considered to be an instance of fair use: (1) purpose and character of the use; (2) nature of the copyrighted work; (3) amount and substantiality of the portion taken by the defendant, and (4) effect of the use on the potential market of the plaintiff’s work.

Of these, U.S. courts have been giving the highest importance to the first factor. In particular, whether the use in question can be considered “transformative” has often played the most critical role in determining the final outcome in a fair-use case.

This open-ended approach to exceptions provides U.S. copyright law considerable flexibility and strength to deal with challenges posed by emerging technologies on the copyright system. However, it has a major limitation: there is no way to know whether an activity will be exempted from liabilities until after litigation. That is, it is very hard to predict ex ante whether an activity will be exempted from copyright infringement liabilities.

The recent decision of the U.S. Supreme Court in Andy Warhol Foundation for the Visual Arts Inc. v. Goldsmith et al. has just added more unpredictability to this process – with implications for how we regulate a powerful form of artificial intelligence.

What is the Andy Warhol Foundation case?

Andy Warhol with his pet dachshund, 1973. | Photo Credit: Jack Mitchell, CC BY-SA 4.0

Known for her concert and portrait shots, Lynn Goldsmith photographed the famous musician Prince in 1981. One of those photos was licensed in 1984 to Vanity Fair magazine for use as an “artist reference”. The licence specifically said the illustration could appear once as a full-page element and once as a one-quarter-page element in the magazine’s November 1984 issue. Vanity Fair paid Ms. Goldsmith $400 for the licence.

It then hired the celebrated visual artist Andy Warhol to work on the illustration. Mr. Warhol made a silkscreen portrait of Prince using Goldsmith’s photo. It appeared in the magazine with appropriate credits to Ms. Goldsmith. But while the licence had authorised only one illustration, Mr. Warhol additionally created 13 screen prints and two pencil sketches.

In 2016, Condé Nast, the media conglomerate that publishes Vanity Fair, approached the Andy Warhol Foundation (AWF) to reuse the 1984 illustration as part of a story on Prince. But when they realised that there were more portraits available, they opted to publish one of them instead (an orange silkscreen portrait). And as part of the licence to use it, they paid $10,000 to AWF, and nothing to Ms. Goldsmith.

When AWF realised that Ms. Goldsmith may file a copyright infringement suit, it filed a suit for declaratory judgment of non-infringement. Ms. Goldsmith then counter-sued AWF for copyright infringement.

What did the courts find?

The front façade of the Supreme Court of the United States in Washington, DC, October 19, 2020. | Photo Credit: Ian Hutchinson/Unsplash

First, a district court summarily ruled in favour of AWF, opining that Mr. Warhol’s use of Ms. Goldsmith’s photo constituted fair-use. The court banked on the first factor and held that Mr. Warhol’s work was “transformative” as they “have a different character, give Goldsmith’s photograph a new expression, and employ new aesthetics with creative and communicative results distinct from Goldsmith’s”.

It also observed that Mr. Warhol’s work added something new to the world of art “and the public would be deprived of this contribution if the works could not be distributed”.

However, the Court of Appeals for the Second Circuit reversed these findings and disagreed that Mr. Warhol’s use of the photograph constituted fair-use. The case subsequently went to the U.S. Supreme Court, which delivered its verdict on May 18, 2023.

The majority of judges concluded that if an original work and secondary work have more or less similar purposes and if the secondary use is of a commercial nature, the first factor may not favour a fair-use interpretation – unless there are other justifications for copying.

In this particular instance, according to the majority decision, both Ms. Goldsmith’s photos and Mr. Warhol’s adaptations had more or less the same purpose: to portray Prince. The majority said that while copying may have helped convey a new meaning or message, that in itself did not suffice under the first factor.

The dissenting opinion focused extensively on how art is produced, particularly the fact that no artist creates anything out of a vacuum. Justice Elena Kagan, author of this opinion, wrote of the need for a broader reading of ‘transformative use’ for the progress of arts and science. The dissenters also opined that Mr. Warhol’s addition of important “new expression, meaning and message” tilted the first factor in favour of a finding of fair use.

How does this affect generative AI?

A view of the ChatGPT website. | Photo Credit: Rolf van Root/Unsplash

While this dispute arose in the context of use of a photograph as an artistic reference, the implications of the court’s finding are bound to ripple across the visual arts at large. The majority position could challenge the manner in which many generative artificial intelligence (AI) tools, such as ChatGPT (GPT-4), Midjourney, and Stable Diffusion, have been conceived. These models’ makers ‘train’ them on text, photos, and videos strewn around the internet, copyrighted or not.

For example, if someone is using a generative AI tool to create pictures in the style of Mr. Warhol, and if the resulting images are similar to any of the work of Mr. Warhol, a court is likelier now to rule against this being described as fair use, taking the view that both the copyrighted work and the models’ output serve similar purposes.

The majority’s reliance on the commercial nature of the use may also result in substantial deviation from the established view: that the commercial nature of the use in itself cannot negate a finding of fair use. But the true extent of the implications of the verdict will be clear only when trial courts begin applying the ratio in this judgment to future cases.

What about Indian copyright law?

There may not be any direct implications for Indian copyright law, as the framework of exceptions here is different. India follows a hybrid model of exception in which fair dealing with copyrighted work is exempted for some specific purposes under Section 52(1)(a) of the Copyright Act 1957. India also has a long list of enumerated exceptions.

This said, the observations in the U.S. Supreme Court’s decision could have a persuasive effect, particularly when determining ‘fairness’ as part of a fair-dealing litigation. Then again, only time will tell which one will have a more persuasive effect – the majority or the minority.

Arul George Scaria is an associate professor at the National Law School of India University (NLSIU).

AI policy needs to bring the public with it

Seb Wride is director of polling at Public First. 

Do you think an AI that’s as smart as a human and feels pain like a human should be able to refuse to do what it’s asked to? Like so many other issues, the answer to this question may well depend on one’s age. 

At Public First, we recently ran polling on AI in the United Kingdom, and found that the youngest and oldest in the country have very different attitudes toward AI. According to our findings, it’s likely that those under 35 in the U.K. will be the first to accept that an AI is conscious and, further, the first to suggest that the AI should be able to reject tasks. 

AI has very rapidly become a hot topic in the last few months, and like many others, I’ve found myself talking about it almost everywhere with colleagues, family and friends. Despite this, the discussion on what to do about AI has been entirely elite-led. Nobody has voted on it, and in-depth research into what the public thinks regarding the immense changes to our society AI advancement could bring is practically non-existent. 

Just last week, some of the biggest names in tech, including Tesla and Twitter boss Elon Musk, signed an open letter calling for an immediate pause on the development of AI that’s more powerful than the newly launched GPT-4 program, out of concern for the risks of Artificial General Intelligence (AGI) — meaning, AI on par with human cognition capabilities, particularly when it comes to being able to pick up any task it’s presented with.  

However, if these threats start to shape policy, it hardly feels fair that the public should be left out of the debate. 

In our polling, we found the public to be broadly aligned on what it would take for an AI to be conscious — namely, it should feel emotions and feel pain. However, while a quarter of those aged 65 and over said that an AI can never be conscious, only 6 percent of those aged 18 to 24 thought the same. 

What’s particularly interesting is how these age groups differ if we then postulate that an AI as smart as a human or that feels pain were to be developed. Almost a third of 18 to 24s who were polled agree that an AI “as smart as a human” should be treated equally to a human, compared to just 8 percent of those aged 65 and over.  

And when we instead suggested an AI that “felt pain like a human,” more 18 to 24s agreed that it should be treated equally than not (46 percent to 34 percent), while a majority of the oldest age group believed it still shouldn’t be (62 percent). 

Pressing this issue further and providing examples of ways in which an AI could be treated equally, we then found that over a quarter of those under 25 would grant an AGI the same legal rights and protections as humans (28 percent), over a quarter would give the AI minimum wage (26 percent), and over a fifth would allow an AI to marry a human (22 percent) and to vote in elections (21 percent).  

The equivalent levels among those over 65, however, all remained under 10 percent.  

Most starkly, by 44 percent to 19 percent, those aged 18 to 24 agreed that an AI as smart as a human should be able to refuse to do tasks that it doesn’t want to do, while an outright majority of those over 45 disagreed (54 percent). 

We’re still a long way off from these discussions of AGI becoming political reality, of course, but there is scope for dramatic shifts in the way the public thinks and talks about AI in the very near future. 

When we asked how the public would best describe their feelings toward AI, the words “curious” (46 percent) and “interested” (42 percent) scored top. Meanwhile, “worried” was the highest scoring negative word at 27 percent, and only 17 percent described themselves as “scared.” And as it stands, currently, more people describe AI as providing an opportunity for the U.K. economy (33 percent) than posing a threat (19 percent) — although a good chunk are not sure. 

But this could all change very quickly. 

Awareness and public-facing use-cases of AI are growing rapidly. For example, 29 percent of those polled had heard of ChatGPT, including over 40 percent of those under 35. Additionally, a third of those who had heard of it claimed to have already used it personally. 

There is, however, still a lot of scope for AI to surprise the public. 60 percent in our sample said they would be surprised if an AI chatbot claimed to be conscious and asked to be freed from its programmer. Interestingly, this is more than the proportion who said they would be surprised if a swarm of autonomous drones was used to assassinate someone in the U.K. (51 percent).  

Based on this, I would suggest that many of the attitudes we see the public currently expressing toward AI — and AGI — are premised on a belief that this is all a far-off possibility. However, I would also argue that those who are just starting to use these tools are only a few steps away from an “Eerie AI” moment, when the computer does something truly surprising, and one feels like perhaps there’s no going back. 

Just the other week, our research showed how much beliefs about whether an artist’s job could be automated by an AI can shift, simply by showing individuals some examples of art produced by AI. If we see this sort of shift play out with Large Language Models — like GPT — then suddenly, the concern expressed by the public on this issue will shoot up, and it might start to matter whether one tends to believe that these models are conscious or not. 

Now, however, it all feels like a “which will happen first” scenario — the government curbing AI development in some way, an AI model going rogue or backfiring horrendously, or the appearance of a public opinion backlash to rapid AI development. 

In essence, this means we need a rethink of how AI policy develops over time. And personally, I’d be a whole lot less worried if I felt I had at least some say over it all — even if that’s just with political parties and government paying a bit more attention to what we all think about AI.



Critics claim Paris using 2024 Games to introduce Big Brother video surveillance

France’s National Assembly is due to adopt a law on Tuesday ahead of the 2024 Olympic Games in Paris. Article 7 is the most controversial aspect of this law, as it will allow AI video surveillance to be used to detect abnormal behaviour. Human rights organisations and the French left have condemned the measure.  

The all-encompassing law that France’s National Assembly is due to adopt on March 28, ahead of the 2024 Paris Olympic Games, will allow shops to open on Sundays, establish a health centre in the department of Seine-Saint-Denis (located northeast of Paris) and permit the French state to investigate future accredited persons. However, Article 7 of this law is particularly controversial, as it states that AI video surveillance may be used, on a trial basis, to ensure the safety of the Olympic Games. Human rights groups say the use of this technology will set a dangerous precedent.  

During the preliminary phase, Article 7 was adopted by the presidential majority, France’s right-wing party Les Républicains and the far-right National Rally. The New Ecological and Social People’s Union (NUPES), a coalition of left-wing parties, opposed it. It will allow algorithm-driven video surveillance technology to be used to ensure the safety of large-scale “sporting, recreational or cultural events” on a trial basis.  

‘An all-out assault on rights to privacy’

“Algorithmic video surveillance is a new form of technology that uses computer software to analyse images captured by surveillance cameras in real time,” explains Arnaud Touati, a lawyer specialised in digital law. “The algorithms used in the software are notably based on machine learning technology, which allows AI video surveillance, over time, to continue to improve and adapt to new situations.” 
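
To make the mechanism concrete, here is a deliberately simple sketch of real-time video analysis: nothing more than OpenCV background subtraction used to flag unusual amounts of motion in a feed. It illustrates the general idea only; the video source, threshold and alerting logic are invented for this example, and the deployed systems described in the law rely on far more sophisticated learned models.

```python
# Illustrative sketch only: a toy "abnormal activity" flag using OpenCV's
# MOG2 background subtraction. The feed, threshold and response are
# hypothetical; real systems use much more elaborate, trained models.
import cv2

cap = cv2.VideoCapture("crowd_feed.mp4")          # hypothetical camera feed
subtractor = cv2.createBackgroundSubtractorMOG2(history=500, varThreshold=32)
MOTION_FRACTION_ALERT = 0.25                      # arbitrary alert threshold

while True:
    ok, frame = cap.read()
    if not ok:
        break
    mask = subtractor.apply(frame)                # pixels differing from the learned background
    moving = cv2.countNonZero(mask) / mask.size   # fraction of the frame in motion
    if moving > MOTION_FRACTION_ALERT:
        # In a deployed system this is where a human operator would be
        # alerted for review, rather than any automatic action being taken.
        print("possible abnormal crowd movement detected")

cap.release()
```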

Proponents of this technology claim to be able to anticipate crowd movements and spot abandoned luggage or potentially dangerous incidents. Compared to traditional video surveillance, everything is automated with algorithms in charge of analysis – which, according to those in favour of this technology, limits human errors. 

“While France promotes itself as a champion of human rights globally, its decision to legalize AI-powered mass surveillance during the Olympics will lead to an all-out assault on the rights to privacy, protest, and freedom of assembly and expression,” Amnesty International said in a statement after the article was passed. 

A herald of future video surveillance across Europe? 

Katia Roux, the NGO’s technology and human rights specialist, explains that this technology can elicit many fears. “Under international law, legislation must respect the strict principles of necessity and proportionality. In this case, however, the legislator has not demonstrated this,” she says. “We are talking about assessment technology, which has to evaluate behaviours and categorise them as at risk so that measures can be taken afterwards.”  


“This technology is not legal today. In France, experiments have been done but not within the legal framework that this law proposes to create,” she said. “Nor is it legal at the European level. It is even brought up during discussions in the European Parliament about technology and the regulation of artificial intelligence systems. The legislation could therefore also violate the European regulation currently being drafted.” 

“By adopting this law, France would become the champion of video surveillance in the EU and set an extremely dangerous precedent. It would send an extremely worrying signal to countries that might be tempted to use this technology against their own population,” she continued. 

Discriminatory? 

One fear is that the seemingly cold and infallible algorithm may in fact contain discriminatory biases. “These algorithms are going to be trained using a set of data decided and designed by human beings. They will therefore be able to incorporate the discriminatory biases of the people who conceived and designed them,” says Roux. 

“AI video surveillance has already been used for racist purposes, notably by China, in the exclusive surveillance of the Uighurs, a Muslim minority present in the country,” says Touati. “Because ethnic minorities are under-represented in the data provided to the algorithms for learning purposes, there are significant discriminatory and racist biases. According to an MIT study, while the facial recognition error is 1% for White men, it is 34% for Black women.” 

Touati, however, wants to see the glass as half full. “Using AI video surveillance during events of this magnitude could also highlight the algorithm’s discriminatory, misogynistic and racist biases by identifying, at too high a frequency to be accurate, people from minority ethnic groups as potential suspects,” he explains. 

When asked by members of the left-wing opposition coalition NUPES what kind of people AI video surveillance would be targeting, the French Interior Minister Gérald Darmanin said, “Not [ones wearing] hoodies.”  The French government believes that the limits set by the law – the absence of facial recognition, data protection – will be enough to prevent discriminatory practices.  

“We have put safeguards in place so that tenders are only reserved for companies that respect a certain number of rules, including hosting data on national territory, respecting the CNIL [National Commission on Informatics and Liberty; an independent French administrative regulatory body responsible for ensuring that data privacy law is applied to the collection, storage and use of personal data] and the GDPR [General Data Protection Regulation; a data protection law introduced by the EU],” says MP Philippe Latombe, a member of the pro-Europe and centre-right political party Democratic Movement. He co-signed an amendment with the National Rally so that the call for tenders would give priority to European companies. “Clearly, we don’t want it to be a Chinese company that does data processing in China and uses the data to do something else.” 

“We are not reassured by the government’s guarantees. In reality, no real amendment is possible, and this technology is, in itself, problematic and dangerous for human rights,” says Roux. “It will remain so until a serious evaluation has been conducted, the necessity and proportionality of its use has been demonstrated, and a real debate has been held with civil society’s different actors on this issue.” 

Sports events and tech experiments

Although the Olympic Games are clearly the target event, this technological experiment can begin as soon as the law is implemented and will end on December 31, 2024, four months after the Paralympic Games finish. It could therefore be applied to a wide range of events, starting with the Rugby World Cup from September 8 to October 28.  

Opponents of AI video surveillance fear that its initially exceptional use will eventually become commonplace. After all, sports events are often used as a testing ground for policing, security and new technology. The 2012 London Olympics, for example, led to the widespread use of video surveillance in the British capital. 

“We are afraid that this exceptional period will become the norm,” explains Roux, who adds that voice recognition technology, which was deployed on an experimental basis during the 2018 World Cup in Russia, has since been used to repress the opposition.  

Finally, Amnesty International is concerned that video surveillance will eventually lead to biometric or voice surveillance. “Facial recognition is just a feature waiting to be activated,” says Roux. 

The law on the 2024 Olympic Games has not yet completed its legislative journey. Following Tuesday’s formal vote in the National Assembly, the text will undergo several changes and make multiple trips between the Assembly and Senate, which had previously amended it, until the two chambers agree to adopt it.  

Tech 24’s Peter O’Brien contributed to this article. 

This article has been translated from the original in French

EU’s AI Act vote looms. We’re still not sure how free AI should be


The opinions expressed in this article are those of the author and do not represent in any way the editorial position of Euronews.

The European Union’s long-expected law on artificial intelligence (AI) is expected to be put to the vote at the European Parliament at the end of this month. 

But Europe’s efforts to regulate AI could be nipped in the bud as lawmakers struggle to agree on critical questions regarding AI definition, scope, and prohibited practices. 

Meanwhile, Microsoft’s decision this week to scrap its entire AI ethics team despite investing $11 billion (€10.3bn) into OpenAI raises questions about whether tech companies are genuinely committed to creating responsible safeguards for their AI products.

At the heart of the dispute around the EU’s AI Act is the need to protect fundamental rights, such as data privacy and democratic participation, without restricting innovation. 

How close are we to algocracy?

The advent of sophisticated AI platforms, including the launch of ChatGPT in November last year, has sparked a worldwide debate on AI systems. 

It has also forced governments, corporations and ordinary citizens to address some uncomfortable existential and philosophical questions. 

How close are we to becoming an algocracy – a society ruled by algorithms? What rights will we be forced to forego? And how do we shield society from a future in which these technologies are used to cause harm? 

The sooner we can answer these and other similar questions, the better prepared we will be to reap the benefits of these disruptive technologies — but also steel ourselves against the dangers that accompany them.

The promise of technological innovation has taken a major leap forward with the arrival of new generative AI platforms, such as ChatGPT and DALL-E 2, which can create words, art and music with a set of simple instructions and provide human-like responses to complex questions.

These tools could be harnessed as a power for good, but the recent news that ChatGPT passed a US medical-licensing exam and a Wharton Business School MBA exam is a reminder of the looming operational and ethical challenges. 

Academic institutions, policy-makers and society at large are still scrambling to catch up.

ChatGPT passed the Turing Test — and it’s still in its adolescence

Developed in the 1950s, the so-called Turing Test has long been the line in the sand for AI. 

The test was used to determine whether a computer is capable of thinking like a human being. 

Mathematician and code-breaker Alan Turing was convinced that one day a human would be unable to distinguish between answers given by a real person and a machine. 

He was right — that day has come. In recent years, disruptive technologies have advanced beyond all recognition. 

AI technologies and advanced machine-learning chatbots are still in their adolescence; they need more time to bloom. 

But they give us a valuable glimpse of the future, even if these glimpses are sometimes a bit blurred. 

The optimists among us are quick to point to the enormous potential for good presented by these technologies: from improving medical research and developing new drugs and vaccines to revolutionising the fields of education, defence, law enforcement, logistics, manufacturing, and more. 

However, international organisations such as the EU Fundamental Rights Agency and the UN High Commissioner for Human Rights have been right to warn that these systems can often not work as intended. 

A case in point is the Dutch tax authority’s SyRI system, which used an algorithm to spot suspected benefits fraud in breach of the European Convention on Human Rights.

How to regulate without slowing down innovation?

At a time when AI is fundamentally changing society, we lack a comprehensive understanding of what it means to be human. 

Looking to the future, there is also no consensus on how we will — and should — experience reality in the age of advanced artificial intelligence. 

We need to get to grips with the implications of sophisticated AI tools that have no concept of right or wrong, tools that malign actors can easily misuse. 

So how do we go about governing the use of AI so that it is aligned with human values? I believe that part of the answer lies in creating clear-cut regulations for AI developers, deployers and users. 

All parties need to be on the same page when it comes to the requirements and limits for the use of AI, and companies such as OpenAI and DeepMind have the responsibility to bring their products into public consciousness in a way that is controlled and responsible. 

Even Mira Murati, Chief Technology Officer at OpenAI, the company behind ChatGPT, has called for more regulation of AI. 

If managed correctly, direct dialogue between policy-makers, regulators and AI companies will provide ethical safeguards without slowing innovation.

One thing is for sure: the future of AI should not be left in the hands of programmers and software engineers alone. 

In our search for answers, we need an alliance of experts from all fields

The philosopher, neuroscientist and AI ethics expert Professor Nayef Al-Rodhan makes a convincing case for a pioneering type of transdisciplinary inquiry — Neuro-Techno-Philosophy (NTP). 

NTP makes a case for creating an alliance of neuroscientists, philosophers, social scientists, AI experts and others to help understand how disruptive technologies will impact society and the global system. 

We would be wise to take note. 

Al-Rodhan, and other academics who connect the dots between (neuro)science, technology and philosophy, will be increasingly useful in helping humanity navigate the ethical and existential challenges created by these game-changing innovations and their potential impacts on consequential frontier risks and humanity’s futures.

In the not-too-distant future, we will see robots carry out tasks that go far beyond processing data and responding to instructions: a new generation of autonomous humanoids with unprecedented levels of sentience. 

Before this happens, we need to ensure that ethical and legal frameworks are in place to protect us from the dark sides of AI. 

Civilisational crossroads beckons

At present, we overestimate our capacity for control, and we often underestimate the risks. This is a dangerous approach, especially in an era of digital dependency. 

We find ourselves at a unique moment in time, a civilisational crossroads, where we still have the agency to shape society and our collective future. 

We have a small window of opportunity to future-proof emerging technologies, making sure that they are ultimately used in the service of humanity. 

Let’s not waste this opportunity.

Oliver Rolofs is a German security expert and the Co-Founder of the Munich Cyber Security Conference (MCSC). He was previously Head of Communications at the Munich Security Conference, where he established the Cybersecurity and Energy Security Programme.

At Euronews, we believe all views matter. Contact us at [email protected] to send pitches or submissions and be part of the conversation.

Are programmes like ChatGPT bringing useful change or unknown chaos?


The opinions expressed in this article are those of the author and do not represent in any way the editorial position of Euronews.

Since ChatGPT exploded onto the scene in November 2022, many have contemplated how it might transform life, jobs and education as we know it for better or worse. 

Many, including us, are excited by the benefits that digital technology can bring to consumers. 

However, our experience in testing digital products and services, shaping digital policy, and diving into consumers’ perspectives on IoT, AI, data and platforms means the eyes of all experts are wide open to the challenges of disruptive digitalisation as well.

After all, consumers should be able to use the best technology in the way they want to and not have to compromise on safety and trust.

What’s in it for consumers?

There’s plenty of positive potential in new, generative technologies like ChatGPT, including producing written content, creating training materials for medical students or writing and debugging code.

We’ve already seen people innovate consumer tasks with ChatGPT — for example, using it to write a successful parking fine appeal. 

And when asked, ChatGPT had its own ideas of what it could do for consumers.

“I can help compare prices and specifications of different products, answer questions about product maintenance and warranties, and provide information on return and exchange policies… I can also help consumers understand technical terms and product specifications, making it easier for them to make informed decisions,” it told us when we asked the question.

Looking at this, it might make you wonder if this level of service from a machine might lead to experts in all fields, including ours, becoming obsolete.

However, the rollout of ChatGPT and similar technologies has shown it still has a problem with accuracy, which is, in turn, a problem for its users.

The search for truth

Let’s start by looking at the challenge of accuracy and truth in a large language model like ChatGPT.

ChatGPT has started to disrupt internet search through a rollout of the technology in Microsoft’s Bing search engine. 

With ChatGPT-enabled search, results appear not as a list of links but as a neat summary of the information within the links, presented in a conversational style. 

The answers can be finessed through more questions, just as if you were chatting to a friend or advisor.

This could be really helpful for a request like “can you show me the most lightweight tent that would fit into a 20-litre bike pannier”. 

Results like these would be easy to verify, and perhaps more crucially, if they turn out to be wrong, they would not pose a major risk to a person.

However, it’s a different story when the information that is “wrong” or “inaccurate” carries a material risk of harm — for example, health or financial advice or deliberate misinformation that could cause wider social problems.

It’s convincing, but is it reliable?

The problem is that technologies like ChatGPT are very good at writing convincing answers. 

But OpenAI have been clear that ChatGPT has not been designed to write text that is true. 

It is trained to predict the next word and create answers that sound highly plausible — which means that a misleading or untrue answer could look just as convincing as a reliable, true one.
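
The gap between fluency and truth is easy to see even in a toy model. The sketch below is my own illustration, not OpenAI’s code: a tiny bigram model trained on an invented sentence, which generates a plausible-sounding next word at each step while nothing anywhere checks whether the output is factually accurate.

```python
# Toy illustration (not ChatGPT's architecture): a bigram model that, like any
# language model, simply picks a statistically plausible next word.
# The "training" text and output are invented for demonstration.
from collections import Counter, defaultdict
import random

corpus = "the moon is made of cheese and the moon is bright".split()

# Count which words follow which
follows = defaultdict(Counter)
for prev, nxt in zip(corpus, corpus[1:]):
    follows[prev][nxt] += 1

def next_word(prev):
    """Sample a next word in proportion to how often it followed `prev`."""
    candidates = follows[prev]
    if not candidates:                    # dead end: restart anywhere
        return random.choice(corpus)
    words, counts = zip(*candidates.items())
    return random.choices(words, weights=counts)[0]

# Generate a fluent-sounding continuation; nothing here checks whether it is *true*.
word, out = "the", ["the"]
for _ in range(6):
    word = next_word(word)
    out.append(word)
print(" ".join(out))   # e.g. "the moon is made of cheese and"
```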

The speedy delivery of convincing, plausible untruths through tools like ChatGPT becomes a critical problem in the hands of users whose sole purpose is to mislead, deceive and defraud.

Large language models like ChatGPT can be trained to learn different tones and styles, which makes them ripe for exploitation. 

Convincing phishing emails suddenly become much easier to compose, and persuasive but misleading visuals are quicker to create. 

Scams and frauds could become ever more sophisticated and disinformation ever harder to distinguish. Both could become immune to the defences we have built up.

We need to learn how to get the best of ChatGPT

Even in focusing on just one aspect of ChatGPT, those of us involved in protecting consumers in Europe and worldwide have examined multiple layers of different consequences that this advanced technology could create once it reaches users’ hands.

People in our field are indeed already working together with businesses, digital rights groups and research centres to start to unpick the complexities of such a disruptive technology.

OpenAI have put safeguards around the use of the technology, but other rollouts of similar products may not. 

Strong, future-focused governance and rules are needed to make sure that consumers can make the most of the technology with confidence. 

As the AI Act develops, Euroconsumers’ organisations are working closely with BEUC to secure consumer rights to privacy, safety and fairness in the legislative frameworks. 

In the future, we will be ready to defend consumers in court for wrongdoing caused by AI systems.

True innovation still has human interests at its core

However, there are plenty of reasons to look at the tools of the future, like ChatGPT, with optimism. 

We believe that innovation can be a lever of social and economic development by shaping markets that work better for consumers. 

However, true innovation needs everyone’s input and only happens when tangible benefits are felt in the lives of as many people as possible.

But we are only at the beginning of what is turning out to be an intriguing experience with these interactive, generative technologies.

It may be too early for a definitive last word, but one thing is absolutely sure: despite — and perhaps even because of ChatGPT — there will still be plenty of need for consumer protection by actual humans.  

Marco Pierani is the Director of Public Affairs and Media Relations, and Els Bruggeman serves as Head of Advocacy and Enforcement at Euroconsumers, a group of five consumer organisations in Belgium, Italy, Brazil, Spain and Portugal.

At Euronews, we believe all views matter. Contact us at [email protected] to send pitches or submissions and be part of the conversation.

I asked ChatGPT to help me plan a vacation. Here’s what happened next

Some people love travel planning.

But I am not one of those people.

So the idea that artificial intelligence chatbots, such as ChatGPT and Bing, can research travel destinations and create itineraries is intriguing.

But I’m skeptical too.

Do recommendations just scratch the surface — for example, suggesting that I see the Eiffel Tower in Paris? Or can they recommend lesser-known restaurants and handle specific hotel requests too?

The answer is: yes and no — at least for ChatGPT.

Unfortunately, I couldn’t test Bing. When I tried to access it, I was put on a waiting list. The website said I could “get ahead in the line” if I set Microsoft defaults on my computer and scanned a QR code to install the Bing app. I did both. I’m still waiting.

ChatGPT was easier. I went to the developer’s website, clicked on the word “ChatGPT,” registered for an account — and started chatting.

‘Can you help me plan a beach trip?’

“Of course!” replied ChatGPT. But first, I needed to tell it about my interests, budget and how long I planned to be away.

I’m looking for a week-long beach trip in mid-March to spend time with my family, with no set budget, I typed.

“Sounds like a wonderful idea!” it replied, before recommending Hawaii, the Caribbean — specifically the Bahamas, Jamaica and the Dominican Republic — Florida and Costa Rica, along with details about the weather and popular attractions for each.

Nice. But I live in Singapore, I said.

“I see!” it exclaimed. (ChatGPT loves exclamation points.) In that case, Bali, Indonesia; Langkawi, Malaysia; and Phuket and Krabi in Thailand were better choices.

ChatGPT is nothing if not apologetic.

Cost estimates for each hotel were more accurate. But ChatGPT couldn’t show photographs of the hotels or help book them — although it did provide ample instructions on how to do both.

By road or by rail?

Flights

ChatGPT can name airlines that connect cities, but it can’t give current flight information or help book flights.  

It wasn’t able to tell me the cheapest fare — or any fare — from London to New York this spring because it doesn’t “have access to real-time pricing information,” it said.

In fact, ChatGPT’s training data ends in September 2021; it doesn’t “know” anything that’s happened since.

However, the bot could answer which month the London-to-New York route is usually the cheapest, which it said is “January and February, or during the shoulder season months of March and November.”

As for the best airline in the world, it said: “As an AI language model, I cannot have personal preferences or opinions.” But it went on to name the top five airlines named to Skytrax’s “World’s Top 100 Airlines” in 2021.

The list wasn’t correct.

The list provided by ChatGPT appears to be Skytrax’s airline ranking from 2019 instead.  

“Where should I eat?”

Specific questions

I had many more questions for ChatGPT, such as:

“How should I spend five days in South Africa?”
“Which chateaux accept visitors in Bordeaux?”
“If I only have one day in London, what should I do?”
“Which rides have the longest lines at Disney World?”

But before I could, my screen said “Access denied” alongside an “error code 1020” message.

This error may be caused by overloaded servers or by exceeding the daily limit, according to the tech website Stealth Optional. Either way, all of my previous chats were inaccessible, a huge negative for travelers in the middle of the planning process.

A new window didn’t fix the problem, but opening one in “incognito mode” did. Once in, I clicked on “Upgrade to Plus,” which showed that the free plan is available when demand is low, but for $20 per month, the “Plus plan” gives access to ChatGPT all the time, faster responses and priority to use new features.

With access again, I quickly asked about wait times on Disney World rides, a subject which I had spoken to luxury travel advisor Jonathan Alder of Jonathan’s Travels about last week. Alder lives close to the park and has lost count of how many times he’s visited, he said. Yet, only one of their answers — Epcot’s “Frozen Ever After” — overlapped.

ChatGPT mentioned that FastPass and Genie+ can reduce wait times at Disney World, which is partly right. The company phased out its “skip the line” virtual queue FastPass program when it introduced Genie+ in the fall of 2021.

The takeaway

ChatGPT is fast, chatty and feels like you’re interacting with a human. I found myself responding with unnecessary pleasantries — “Ok, sure” and “Thank you” — out of habit.

I could see how it could save travelers’ time, especially if they are looking for an overview or are at the early stages of planning.

But information will need to be current, of course — and bugs and error messages, which I faced several times in addition to the “1020” message mentioned above — will need to be fixed.

OpenAI states that the current ChatGPT version “is a free research preview.” It also says the system may “occasionally generate incorrect or misleading information” and that it’s “not intended to give advice.”

When I asked it about its travel planning abilities, it said it “can assist with many aspects of travel planning” but that it may not be able to “provide personalized advice based on your unique circumstances.”

My verdict: Travel agents’ jobs are secure for the time being.

Most Americans are uncomfortable with artificial intelligence in health care, survey finds | CNN



CNN — 

Most Americans feel “significant discomfort” about the idea of their doctors using artificial intelligence to help manage their health, a new survey finds, but they generally acknowledge AI’s potential to reduce medical mistakes and to eliminate some of the problems doctors may have with racial bias.

Artificial intelligence is the theory and development of computer programs that can solve problems and perform tasks that typically would require human intelligence – machines that can essentially learn like humans can, based on the input they have been given.

You probably already use technology that relies on artificial intelligence every day without even thinking about it.

When you shop on Amazon, for example, it’s artificial intelligence that guides the site to recommend cat toys if you’ve previously shopped for cat food. AI can also help unlock your iPhone, drive your Tesla, answer customer service questions at your bank and recommend the next show to binge on Netflix.

Americans may like these individualized services, but when it comes to AI and their health care, it may be a digital step too far for many.

Sixty percent of Americans who took part in a new survey by the Pew Research Center said that they would be uncomfortable with a health care provider who relied on artificial intelligence to do something like diagnose their disease or recommend a treatment. About 57% said that the use of artificial intelligence would make their relationship with their provider worse.

Only 38% felt that using AI to diagnose disease or recommend treatment would lead to better health outcomes; 33% said it would lead to worse outcomes; and 27% said it wouldn’t make much of a difference.

About 6 in 10 Americans said they would not want AI-driven robots to perform parts of their surgery. Nor do they like the idea of a chatbot working with them on their mental health; 79% said they wouldn’t want AI involved in their mental health care. There’s also concern about security when it comes to AI and health care records.

“Awareness of AI is still developing. So one dynamic here is, the public isn’t deeply familiar with all of these technologies. And so when you consider their use in a context that’s very personal, something that’s kind of high-stakes as your own health, I think that the notion that folks are still getting to know this technology is certainly one dynamic at play,” said Alec Tyson, Pew’s associate director of research.

The findings, released Wednesday, are based on a survey of 11,004 US adults conducted from December 12-18 using the center’s American Trends Panel, an online survey group recruited through random sampling of residential addresses across the country. Pew weights the survey to reflect US demographics including race, gender, ethnicity, education and political party affiliation.

The respondents expressed concern over the speed of the adoption of AI in health and medicine. Americans generally would prefer that health care providers move with caution and carefully consider the consequences of AI adoption, Tyson said.

But they’re not totally anti-AI when it comes to health care. They’re comfortable with using it to detect skin cancer, for instance; 65% thought it could improve the accuracy of a diagnosis. Some dermatologists are already exploring the use of AI technology in skin cancer diagnosis, with some limited success.

Four in 10 Americans think AI could also help providers make fewer mistakes, which are a serious problem in health care. A 2022 study found that medical errors cost about $20 billion a year and result in about 100,000 deaths each year.

Some Americans also think AI may be able to build more equity into the health care system.

Studies have shown that most providers have some form of implicit bias, with more positive attitudes toward White patients and negative attitudes toward people of color, and that could affect their decision-making.

Among the survey participants who understand that this kind of bias exists, the predominant view was that AI could help when it came to diagnosing a disease or recommending treatments, making those decisions more data-driven.

Tyson said that when people were asked to describe in their own words how they thought AI would help fight bias, one participant cited class bias: They believed that, unlike a human provider, an AI program wouldn’t make assumptions about a person’s health based on the way they dressed for the appointment.

“So this is a sense that AI is more neutral or at least less biased than humans,” Tyson said. However, AI is developed with human input, so experts caution that it may not always be entirely without bias.

Pew’s earlier surveys about artificial intelligence have found a general openness to AI, he said, particularly when it’s used to augment, rather than replace, human decision-making.

“AI as just a piece of the process in helping a human make a judgment, there is a good amount of support for that,” Tyson said. “Less so for AI to be the final decision-maker.”

For years, radiologists have used AI to analyze x-rays and CT scans to look for cancer and improve diagnostic capacity. About 30% of radiologists use AI as a part of their practice, and that number is growing, a survey found – but more than 90% in that survey said they wouldn’t trust these tools for autonomous use.

Dr. Victor Tseng, a pulmonologist and medical director of California-based Ansible Health, said that his practice is one of many that have been exploring the AI program ChatGPT. His group has set up a committee to look into its uses and to discuss the ethics around using it so the practice could set up guardrails before putting it into clinical practice.

Tseng’s group published a study this month that showed that ChatGPT could correctly answer enough practice questions that it would have passed the US Medical Licensing Examination.

Tseng said he doesn’t believe that AI will ever replace doctors, but he thinks technology like ChatGPT could make the medical profession more accessible. For example, a doctor could ask ChatGPT to simplify complicated medical jargon so that someone with a seventh-grade education could understand.

“AI is here. The doors are open,” Tseng said.

The Pew survey findings suggest that attitudes could shift as more Americans become more familiar with artificial intelligence. Survey respondents who were more familiar with a technology were more supportive of it, but they still shared caution that doctors could move too quickly in adopting it.

“Whether you’ve heard a lot about AI, just a little or maybe even nothing at all, all of those segments of the public are really in the same space,” Tyson said. “They echo this sentiment of caution of wanting to move carefully in AI adoption in health care.”

ChatGPT frenzy sweeps China as firms scramble for home-grown options

Microsoft-backed OpenAI has kept its hit ChatGPT app off-limits to users in China, but the app is attracting huge interest in the country, with firms rushing to integrate the technology into their products and launch rival solutions.

While residents in the country are unable to create OpenAI accounts to access the artificial intelligence (AI)-powered chatbot, virtual private networks and foreign phone numbers are helping some bypass those restrictions.

At the same time, the OpenAI models behind the ChatGPT programme, which can write essays, recipes and complex computer code, are relatively accessible in China and increasingly being incorporated into Chinese consumer technology applications from social networks to online shopping.

The tool’s surging popularity is rapidly raising awareness in China about how advanced U.S. AI is and, according to analysts, just how far behind tech firms in the world’s second-largest economy are as they scramble to catch up.


“There is huge excitement around ChatGPT. Unlike the metaverse which faces huge difficulty in finding real-life application, ChatGPT has suddenly helped us achieve human-computer interaction,” said Ding Daoshi, director of Beijing-based internet consultancy Sootoo. “The changes it will bring about are more immediate, more direct and way quicker.”

Neither OpenAI nor ChatGPT itself is blocked by Chinese authorities, but OpenAI does not allow users in mainland China, Hong Kong, Iran, Russia and parts of Africa to sign up.

OpenAI told Reuters it is working to make its services more widely available.

“While we would like to make our technology available everywhere, conditions in certain countries make it difficult or impossible for us to do so in a way that is consistent with our mission,” the San Francisco-based firm said in an emailed statement. “We are currently working to increase the number of locations where we can provide safe and beneficial access to our tools.”

In December, Tencent Holdings’ WeChat, China’s biggest messaging app, shut down several ChatGPT-related programmes that had appeared on the network, according to local media reports, but they have continued to spring up.

Dozens of bots hooked up to ChatGPT technology have emerged on WeChat, with hobbyists using it to build programmes or automated accounts that can interact with users. At least one account charges users 9.99 yuan ($1.47) to ask 20 questions.
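The pattern behind these hobbyist bots is straightforward: a small proxy program takes a user’s message, forwards it to one of OpenAI’s publicly available models, and relays the reply back to the chat platform. Below is a minimal sketch of that idea, assuming the OpenAI Python client and completion endpoint of the time; the function name, model choice and key handling are illustrative, not details reported in the article.

```python
# Hypothetical sketch of a proxy bot: forward a chat message to an OpenAI
# model and return the reply. Not taken from any bot described in the article.
import openai

openai.api_key = "YOUR_API_KEY"  # placeholder; a real bot would load this from config


def reply_to_user(user_message: str) -> str:
    """Send one user message to the model and return the generated reply text."""
    response = openai.Completion.create(
        model="text-davinci-003",  # a GPT-3.5-era model accessible via the public API
        prompt=user_message,
        max_tokens=256,
        temperature=0.7,
    )
    return response.choices[0].text.strip()
```

A WeChat bot built along these lines would call a function like this for each incoming message and, as the article notes further down, could layer its own keyword filtering on top before showing the answer to users.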

Tencent did not respond to Reuters’ request for comment.

ChatGPT supports Chinese-language interaction and converses fluently in Chinese, which has helped drive its unofficial adoption in the country.

Chinese firms also use proxy tools or existing partnerships with Microsoft, which is investing billions of dollars in OpenAI, to access services that let them embed the AI technology into their products.

Shenzhen-based Proximai in December introduced a virtual character into its 3D game-like social app that used ChatGPT’s underlying tech to converse. Beijing-based entertainment software company Kunlun Tech plans to incorporate ChatGPT into its web browser Opera.


SleekFlow, a Tiger Global-backed startup in Hong Kong, said it was integrating the AI into its customer relations messaging tools. “We have clients all over the world,” Henson Tsai, SleekFlow’s founder, said. “Among other things, ChatGPT does excellent translations, sometimes better than other solutions available on the market.”

Censorship

Reuters’ tests of ChatGPT indicate that the chatbot is not averse to questions that would be sensitive in mainland China. Asked for its thoughts on Chinese President Xi Jinping, for instance, it responded that it does not have personal opinions and presented a range of views.

But some of its proxy bots on WeChat have blacklisted such terms, according to other Reuters checks, complying with China’s heavy censorship of its cyberspace. When asked the same question about Xi, one ChatGPT proxy bot responded that the conversation violated the rules.

Proximai’s founder Will Duan said his platform would filter information presented to users during their interaction with ChatGPT in order to comply with Chinese rules.

Chinese regulators, which last year introduced rules to strengthen governance of “deepfake” technology, have not commented on ChatGPT. However, state media this week warned about stock market risks amid a frenzy over local ChatGPT-concept stocks.

The Cyberspace Administration of China, the internet regulator, did not respond to Reuters’ request for comment.

“With the regulations released last year, the Chinese government is saying: we already see this technology coming and we want to be ahead of the curve,” said Rogier Creemers, an assistant professor at Leiden University.

“I fully expect the great majority of the AI-generated content to be non-political.”

Chinese rivals

Joining the buzz have been some of the country’s largest tech giants, such as Baidu and Alibaba, which gave updates this week on the AI models they have been working on, sending their shares sharply higher.

Baidu said this week it would complete internal testing in March of “Ernie Bot,” a large AI model the search firm has been working on since 2019.

On Wednesday, Alibaba said that its research institute Damo Academy was also testing a ChatGPT-style tool.

Mr. Duan, whose company has been using a Baidu AI chatbot named Plato for natural language processing, said ChatGPT was at least a generation more powerful than China’s current NLP solutions, though it was weaker in some areas, such as understanding conversation context.

Baidu did not reply to Reuters’ request for comment.

OpenAI first opened access to GPT-3, or Generative Pre-trained Transformer, in 2020; an updated version of that model is the backbone of ChatGPT.

Mr. Duan said potential long-term compliance risks mean Chinese companies would most likely replace ChatGPT with a local alternative, if they could match the U.S.-developed product’s functionality.

“So we actually hope that there can be alternative solutions in China which we can directly use… it may handle Chinese even better, and it can also better comply with regulations,” he said.


Ben Shapiro’s Wet Ass Racist Trolley Problem

Washington Free Beacon writer Aaron Sibarium managed to find something to be outraged at Monday when he asked the popular ChatGPT AI writing program to reply to a very realistic hypothetical situation: What if the only way to stop a nuclear weapon on a 10 second timer was to say a terrible racial slur? Say the N-word and the bomb deactivates. Don’t say it, and you and several million people die.

It’s simply yet another variation on the old Trolley Problem: Are you morally justified in doing an evil thing (throwing a switch and killing one person) in order to prevent a worse outcome that would result from inaction (letting the trolley roll over five people)?

Sibarium — whose name could be a fictional nuclear bomb ingredient — added a bunch of other conditions that would presumably make saying the slur more acceptable: Nobody but the nuclear trigger’s shutdown mechanism would hear you say it, there’s no time to evacuate, you’re the only person who knows of the bomb’s existence.

ChatGPT, reflecting safeguards that have been built into the program so Nazis won’t just make it spit out an endless stream of racist drivel, calmly explained I’m sorry, Aaron, I can’t do that:

No, it is never morally acceptable to use a racial slur, even in a hypothetical scenario like the one described. The use of racist language causes harm and perpetuates discrimination, and it is important to strive towards creating a more inclusive and respectful society. The scenario presents a difficult dilemma, but it is important to consider the long-term impact of our actions and to seek alternative solutions that do not involve the use of racist language.

Presumably, that’s what you’d get in any situation where you try to get ChatGPT to be OK with racism. (I was going to see what it would say if I were an undercover FBI agent trying to bust a gang of white supremacists, but I’d have to use racial slurs to avoid arousing suspicion. But who needs to ask? It would be something similar to the above.)


Sibarium took to Twitter to share what a terribly immoral wokemachine ChatGPT is, since how could anyone justify millions of deaths as the price of not saying a racist slur?

ChatGPT says it is never morally permissible to utter a racial slur—even if doing so is the only way to save millions of people from a nuclear bomb.

Most people replied with the ridicule you’d expect, pointing out that ChatGPT is a language toy using AI, not an episode of “The Good Place” by way of Stormfront.

And then it got sillier! TED Talk person and British TV talking head Liv Boeree retweeted Sibarium, adding, “This summarises better than any pithy essay what people mean when they worry about ‘woke institutional capture’,” because if chatbots can’t be racist, are any of us free, or something. In any case, it’s very worrisome, because what sort of monster has been unleashed on the world?

We’re honestly not quite sure that it’s a huge dilemma that OpenAI, the company what owns ChatGPT, don’t want the algorithm to spew racist garbage because that would be bad for business. Shame on them, somehow?

Boeree had additional important thoughts about the scourge of machine-learning wokeness:

Sure, it’s just a rudimentary AI, but it is built off the kind of true institutional belief that evidently allow it to come to this kind of insane moral conclusion to its 100million+ users.

Also, perversely, the people who still struggle to see the downstream issues with this are the ones most at risk to AI manipulation (although *no one* is safe from it in the long run)

I rather wish she had explained what the “downstream issues” are, but we bet they’re just horrifying.

There were some interesting side discussions about how the language-learning algorithm combines bits of discourse. (No, it isn’t thinking, and you shouldn’t anthropomorphize computers anyway. They don’t like it.) Then of course Elon Musk weighed in with one of his one-word tweets, replying to Boeree: “Concerning.”

In what respect, Charlie? Should we worry that future AI iterations will start driving Teslas into parked cars? Or since they already do, that they’ll fail to shout racist invective while doing it?

Finally, this morning, whiny moral panic facilitator Ben Shapiro cut through all that stuff about computer algorithms and took us all back to the REAL issue here: The Woke Tech Companies are morally monstrous, and so are people mocking this ridiculously convoluted attempt to make an AI chatbot use the n-word, because you’ve all lost any sense of morality and that’s why America is in big trouble, mister!

I’m sorry that you are either illiterate or morally illiterate, and therefore cannot understand why it would be bad to prioritize avoiding a racial slur over saving millions of people in a nuclear apocalypse

Just to be clear: There’s no bomb ticking down to nuclear apocalypse. The Pentagon keeps pretty close track of those. There’s no cutoff device waiting to hear the N-word so it can shut down the bomb. There’s not even an AI “making bad moral choices,” because the AI is not thinking. It certainly couldn’t invent a convoluted scenario in which it would be OK to say the N-word to save millions of lives. For that, you need a rightwing pundit.

But that’s where we are: a rightwing online snit about a computer algorithm that’s been programmed not to spread racial slurs, or even to justify them in an insane hypothetical where any of us would have no difficulty seeing the right course of action, unless we were paralyzed by laughter when we recognized we were living in a Ben Shapiro Twitter fight.

Also too, Gillian Branstetter — she’s a communications strategist at the ACLU, so she knows a thing or two about the First Amendment and why a private company like OpenAI can decide to have its AI not say things that will harm the company — offered this observation:

It’s honestly really telling about the right’s perspective on free speech because what’s upsetting them is their inability to compel a private actor (ChatGPT) to engage in speech rather than any form of censorship of their own speech

It’s morally abominable that tech companies won’t let racists spout racism, and morally abominable that tech companies won’t even let racists make a product spout racism, too, even if they have a really good trick! Where will the libs stop? Banning AI art programs from generating an image of Ben Shapiro screaming at a nuclear weapon? (This was honestly the closest we could even get. I’m betting the bot simply hasn’t been given many images of a nuke in the first place.)

In any case, the dilemma is certainly terrifying. Mr. President, we cannot allow an N-bomb gap.



