Why Sora, OpenAI’s new text-to-video tool, is raising eyebrows

Sora is ChatGPT maker OpenAI’s new text-to-video generator. Here’s what we know about the new tool provoking concern and excitement in equal measure.

The maker of ChatGPT is now diving into the world of video created by artificial intelligence (AI).

Meet Sora – OpenAI’s new text-to-video generator. The tool, which the San Francisco-based company unveiled on Thursday, uses generative AI to instantly create short videos based on written commands.

Sora isn’t the first to demonstrate this kind of technology. But industry analysts point to the high quality of the tool’s videos displayed so far, and note that its introduction marks a significant leap for both OpenAI and the future of text-to-video generation overall.

Still, as with all things in the rapidly growing AI space today, such technology also raises fears about potential ethical and societal implications. Here’s what you need to know.

What can Sora do and can I use it yet?

Sora is a text-to-video generator: it uses generative AI to create videos up to 60 seconds long from written prompts. The model can also generate video from an existing still image.

Generative AI is a branch of AI that can create something new. Examples include chatbots, like OpenAI’s ChatGPT, and image-generators such as DALL-E and Midjourney. 

Getting an AI system to generate videos is newer and more challenging but relies on some of the same technology.

Sora isn’t available for public use yet (OpenAI says it’s engaging with policymakers and artists before officially releasing the tool) and there’s a lot we still don’t know. But since Thursday’s announcement, the company has shared a handful of examples of Sora-generated videos to show off what it can do.

OpenAI CEO Sam Altman also took to X, the platform formerly known as Twitter, to ask social media users to send in prompt ideas. 

He later shared realistically detailed videos that responded to prompts like “two golden retrievers podcasting on top of a mountain” and “a bicycle race on ocean with different animals as athletes riding the bicycles with drone camera view”.

While Sora-generated videos can depict complex, incredibly detailed scenes, OpenAI notes that there are still some weaknesses – including trouble with spatial details and cause-and-effect relationships.

For example, OpenAI adds on its website, “a person might take a bite out of a cookie, but afterward, the cookie may not have a bite mark”.

What other AI-generated video tools are out there?

OpenAI’s Sora isn’t the first of its kind. Google, Meta, and the startup Runway ML are among companies that have demonstrated similar technology.

Still, industry analysts stress the apparent quality and impressive length of Sora videos shared so far. 

Fred Havemeyer, head of US AI and software research at Macquarie, said that Sora’s launch marks a big step forward for the industry.

“Not only can you do longer videos, I understand up to 60 seconds, but also the videos being created look more normal and seem to actually respect physics and the real world more,” Havemeyer said. 

“You’re not getting as many ‘uncanny valley’ videos or fragments on the video feeds that look… unnatural”.

While there has been “tremendous progress” in AI-generated video over the last year – including Stable Video Diffusion’s introduction last November – Forrester senior analyst Rowan Curran said such videos have required more “stitching together” for character and scene consistency.

The consistency and length of Sora’s videos, however, represents “new opportunities for creatives to incorporate elements of AI-generated video into more traditional content, and now even to generate full-blown narrative videos from one or a few prompts,” Curran told The Associated Press via email on Friday.

What are the potential risks?

Although Sora’s abilities have astounded observers since Thursday’s launch, anxiety over the ethical and societal implications of AI-generated video also remains.

Havemeyer points to the substantial risks in 2024’s potentially fraught election cycle, for example. 

Having a “potentially magical” way to generate videos that may look and sound realistic presents a number of issues within politics and beyond, he added – pointing to fraud, propaganda, and misinformation concerns.

“The negative externalities of generative AI will be a critical topic for debate in 2024,” Havemeyer said. “It’s a substantial issue that every business and every person will need to face this year”.

Tech companies are still calling the shots when it comes to governing AI and its risks as governments around the world work to catch up. 

In December, the European Union reached a deal on the world’s first comprehensive AI rules, but the act won’t take effect until two years after final approval.

On Thursday, OpenAI said it was taking important safety steps before making Sora widely available.

“We are working with red teamers – domain experts in areas like misinformation, hateful content, and bias – who will be adversarially testing the model,” the company wrote. 

“We’re also building tools to help detect misleading content such as a detection classifier that can tell when a video was generated by Sora”.
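
OpenAI has not said how such a classifier would work. Purely as an illustration of the general idea, the sketch below trains a toy binary classifier on synthetic per-video feature vectors; the features, data and model choice are assumptions made for this example, not OpenAI's method.

```python
# Illustrative only: a toy "AI-generated vs. real" video classifier.
# The features and data are synthetic placeholders, not OpenAI's approach.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)

# Pretend each video is summarised by a small feature vector
# (for example, frame-to-frame noise statistics or compression artefacts).
n_videos = 200
real_features = rng.normal(loc=0.0, scale=1.0, size=(n_videos, 8))
fake_features = rng.normal(loc=0.6, scale=1.2, size=(n_videos, 8))

X = np.vstack([real_features, fake_features])
y = np.array([0] * n_videos + [1] * n_videos)  # 0 = real, 1 = generated

X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

clf = LogisticRegression(max_iter=1000).fit(X_train, y_train)
print(f"held-out accuracy: {clf.score(X_test, y_test):.2f}")
```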

OpenAI’s Vice President of Global Affairs Anna Makanju reiterated this when speaking on Friday at the Munich Security Conference, where OpenAI and 19 other technology companies pledged to voluntarily work together to combat AI-generated election deepfakes.

She noted the company was releasing Sora “in a manner that is quite cautious”.

At the same time, OpenAI has revealed limited information about how Sora was built. 

OpenAI’s technical report did not disclose what imagery and video sources were used to train Sora – and the company did not immediately respond to a request for further comment on Friday.

The Sora release also arrives against the backdrop of lawsuits filed against OpenAI and its business partner Microsoft by some authors and The New York Times over the use of copyrighted written works to train ChatGPT.

These AI tools could help boost your academic research

The future of academia is likely to be transformed by AI language models such as ChatGPT. Here are some other tools worth knowing about.

“ChatGPT will redefine the future of academic research. But most academics don’t know how to use it intelligently,” Mushtaq Bilal, a postdoctoral researcher at the University of Southern Denmark, posted on X.

Academia and artificial intelligence (AI) are becoming increasingly intertwined, and as AI continues to advance, it is likely that academics will continue to either embrace its potential or voice concerns about its risks.

“There are two camps in academia. The first is the early adopters of artificial intelligence, and the second is the professors and academics who think AI corrupts academic integrity,” Bilal told Euronews Next.

He places himself firmly in the first camp.

The Pakistani-born, Denmark-based researcher believes that if used thoughtfully, AI language models could help democratise education and even open the way to more knowledge.

Many experts have pointed out that the accuracy and quality of the output produced by language models such as ChatGPT are not trustworthy. The generated text can sometimes be biased, limited or inaccurate.

But Bilal says that understanding those limitations, paired with the right approach, can make language models “do a lot of quality labour for you,” notably for academia.

Incremental prompting to create a ‘structure’

To create an academia-worthy structure, Bilal says it is fundamental to master incremental prompting, a technique traditionally used in behavioural therapy and special education.

It involves breaking down complex tasks into smaller, more manageable steps and providing prompts or cues to help the individual complete each one successfully. The prompts then gradually become more complicated.

In behavioural therapy, incremental prompting allows individuals to build their sense of confidence. In language models, it allows for “way more sophisticated answers”.

In a thread on X (formerly Twitter), Bilal showed how he managed to get ChatGPT to provide a “brilliant outline” for a journal article using incremental prompting.

In his demonstration, Bilal started by asking ChatGPT about specific concepts relevant to his work, then about authors and their ideas, guiding the AI-driven chatbot through the contextual knowledge pertinent to his essay.

“Now that ChatGPT has a fair idea about my project, I ask it to create an outline for a journal article,” he explained, before declaring the results he obtained would likely save him “20 hours of labour”.

“If I just wrote a paragraph for every point in the outline, I’d have a decent first draft of my article”.
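
Bilal's thread shows prompts rather than code, but the workflow he describes maps naturally onto a multi-turn chat session in which each request builds on the context established by the previous ones. Below is a minimal sketch of that pattern using the OpenAI Python client; the model name and the prompts are placeholders, not his exact ones.

```python
# Minimal sketch of incremental prompting via a multi-turn chat session.
# Assumes the openai package (v1+) and an OPENAI_API_KEY in the environment.
from openai import OpenAI

client = OpenAI()
messages = []  # the growing conversation supplies the "incremental" context


def ask(prompt: str) -> str:
    """Send one prompt, keeping the full conversation history."""
    messages.append({"role": "user", "content": prompt})
    reply = client.chat.completions.create(
        model="gpt-4o-mini",  # placeholder model name
        messages=messages,
    ).choices[0].message.content
    messages.append({"role": "assistant", "content": reply})
    return reply


# Step 1: establish the key concepts of the project.
ask("Explain the key concepts behind my research topic in two paragraphs.")
# Step 2: bring in relevant authors and their ideas.
ask("Which scholars have written about this, and what are their main arguments?")
# Step 3: only now ask for the structure.
outline = ask("Using everything above, draft an outline for a journal article.")
print(outline)
```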

Incremental prompting also allows ChatGPT and other AI models to help when it comes to “making education more democratic,” Bilal said.

Some people have the luxury of discussing potential academic outlines or angles for scientific papers with Harvard or Oxford professors, “but not everyone does,” he explained.

“If I were in Pakistan, I would not have access to Harvard professors but I would still need to brainstorm ideas. So instead, I could use AI apps to have an intelligent conversation and help me formulate my research”.

Bilal recently made ChatGPT think and talk like a Stanford professor. Then, to fact-check how authentic the output was, he asked the same questions to a real-life Stanford professor. The results were astonishing.

ChatGPT is only one of the many AI-powered apps you can use for academic writing, or to mimic conversations with renowned academics.

Here are other AI-driven tools to support your academic work, handpicked by Bilal.

In Bilal’s own words: “If ChatGPT and Google Scholar got married, their child would be Consensus — an AI-powered search engine”.

Consensus looks like most search engines, but what sets it apart is that you ask it yes/no questions, which it answers based on the consensus of the academic community.

Users can also ask Consensus about the relationship between concepts and about something’s cause and effect. For example: Does immigration improve the economy?

Consensus would reply to that question by stating that most studies have found that immigration generally improves the economy, providing a list of the academic papers it used to arrive at the consensus, and ultimately sharing the summaries of the top articles it analysed.

The AI-powered search engine is only equipped to respond to six topics: economics, sleep, social policy, medicine, mental health, and health supplements.

Elicit, “the AI research assistant” according to its founders, also uses language models to answer questions. Still, its knowledge is solely based on research, enabling “intelligent conversations” and brainstorming with a very knowledgeable and verified source.

The software can also find relevant papers without perfect keyword matches, summarise them and extract key information.
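
Elicit has not published its internals, but finding relevant papers without exact keyword matches is typically done with embedding-based semantic search. The sketch below illustrates that general technique with the sentence-transformers library; the model name and the example abstracts are assumptions for the demo, not Elicit's actual pipeline.

```python
# Illustrative semantic search over paper abstracts (not Elicit's code).
import numpy as np
from sentence_transformers import SentenceTransformer

model = SentenceTransformer("all-MiniLM-L6-v2")  # small general-purpose embedding model

abstracts = [
    "We study the effect of sleep deprivation on working memory in adults.",
    "A survey of transformer architectures for natural language processing.",
    "Immigration and its long-run effects on regional labour markets.",
]

query = "How does lack of sleep affect memory?"

# Embed the query and the abstracts, then rank by cosine similarity.
doc_emb = model.encode(abstracts, normalize_embeddings=True)
query_emb = model.encode([query], normalize_embeddings=True)[0]
scores = doc_emb @ query_emb

for idx in np.argsort(scores)[::-1]:
    print(f"{scores[idx]:.3f}  {abstracts[idx]}")
```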

Although language models like ChatGPT are not designed to intentionally deceive, they have been shown to generate text that is not based on factual information and to include fake citations to papers that don’t exist.

But there is an AI-powered app that gives you real citations to actually published papers – Scite.

“This is one of my favourite ones to improve workflows,” said Bilal.

Similar to Elicit, upon being asked a question, Scite delivers answers with a detailed list of all the papers cited in the response.

“Also, if I make a claim and that claim has been refuted or corroborated by various people or various journals, Scite gives me the exact number. So this is really very, very powerful”.

“If I were to teach any seminar on writing, I would teach how to use this app”.

“Research Rabbit is an incredible tool that FAST-TRACKS your research. Best part: it’s FREE. But most academics don’t know about it,” tweeted Bilal.

Called by its founders “the Spotify of research,” Research Rabbit allows adding academic papers to “collections”.

These collections allow the software to learn about the user’s interests, prompting new relevant recommendations.

Research Rabbit also allows visualising the scholarly network of papers and co-authorships in graphs, so that users can follow the work of a single topic or author and dive deeper into their research.
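
Under the hood, a co-authorship network of this kind is simply a graph whose nodes are authors and whose edges record shared papers. A minimal sketch with the networkx library, using invented authors and papers rather than Research Rabbit's data:

```python
# Toy co-authorship graph, illustrating the structure such tools visualise.
# Authors and papers are invented for the example.
import networkx as nx

G = nx.Graph()
papers = {
    "Paper A": ["Alice", "Bob"],
    "Paper B": ["Bob", "Carol"],
    "Paper C": ["Alice", "Carol", "Dan"],
}

# Add an edge between every pair of co-authors on the same paper.
for paper, authors in papers.items():
    for i, a in enumerate(authors):
        for b in authors[i + 1:]:
            if G.has_edge(a, b):
                G[a][b]["papers"].append(paper)
            else:
                G.add_edge(a, b, papers=[paper])

# Who has collaborated with Carol, and on which papers?
for neighbour in G.neighbors("Carol"):
    print(neighbour, G["Carol"][neighbour]["papers"])
```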

ChatPDF is an AI-powered app that makes reading and analysing journal articles easier and faster.

“It’s like ChatGPT, but for research papers,” said Bilal.

Users upload a research paper PDF into the software and then ask it questions.

The app then prepares a short summary of the paper and provides the user with examples of questions that it could answer based on the full article.
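
ChatPDF is a closed product, but the basic "chat with a paper" workflow it describes can be approximated by extracting the PDF's text and passing it to a language model together with a question. A rough sketch, assuming the pypdf and openai packages, a local file called paper.pdf, and a placeholder model name:

```python
# Rough sketch of a "chat with a PDF" workflow (not ChatPDF's implementation).
from pypdf import PdfReader
from openai import OpenAI

reader = PdfReader("paper.pdf")  # placeholder path
paper_text = "\n".join(page.extract_text() or "" for page in reader.pages)

client = OpenAI()  # assumes OPENAI_API_KEY is set in the environment
answer = client.chat.completions.create(
    model="gpt-4o-mini",  # placeholder model name
    messages=[
        {
            "role": "system",
            # Crude truncation to stay within the model's context window.
            "content": "Answer questions using only the paper below.\n\n" + paper_text[:20000],
        },
        {"role": "user", "content": "Summarise the paper's main findings in three bullet points."},
    ],
).choices[0].message.content
print(answer)
```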

What promise does AI hold for the future of research?

The development of AI will be as fundamental “as the creation of the microprocessor, the personal computer, the Internet, and the mobile phone,” wrote Bill Gates in the latest post on his personal blog, titled ‘The Age of AI Has Begun’.

“Computers haven’t had the effect on education that many of us in the industry have hoped,” he wrote. 

“But I think in the next five to 10 years, AI-driven software will finally deliver on the promise of revolutionising the way people teach and learn”.



From Musk and Tusk to Swift: Figures who defined 2023

From Iran to Hollywood, in the domains of space travel, football and tech, 2023 was a year shaped by strong personalities. Some inspired us, most made us reflect, and others occasionally annoyed us. As the year comes to an end, FRANCE 24 has selected some of the personalities who left their mark on 2023.

  • Narges Mohammadi, fighting for human rights in Iran

Iranian activist Narges Mohammadi was awarded the Nobel Peace Prize “for her fight against the oppression of women in Iran and her fight to promote human rights and freedom for all”. 

The journalist plays a key role in Iran’s “Women, Life, Freedom” movement, which has garnered global attention since the death of Mahsa Amini in the custody of Iran’s police in September 2022. The movement advocates for the abolition of mandatory hijab laws and the elimination of various forms of discrimination against women in Iran.

Arrested for the first time 22 years ago, Mohammadi has been held in Evin Prison, known for its mistreatment of detainees, since 2021.

From behind bars, where she has spent much of the last two decades on charges like “propaganda”, “rebellion”, and “endangering national security”, she continues her fight against what she terms a “tyrannical and misogynistic religious regime”.

At the Nobel Prize ceremony in Oslo, her 17-year-old twins living in exile in France since 2015 delivered her speech.

  • Donald Tusk, bringing Poland back into the fold

After eight years of nationalist rule by the Law and Justice Party (PiS), Poland’s Donald Tusk is back in his country’s top job.

Having already served as prime minister from 2007 to 2014, the committed europhile and former president of the European Council (2014-2019) promises to put his country solidly back on democratic rails.

His priorities are clear: to restore the rule of law and rebuild Poland’s credibility within the EU. His coalition also advocates legalising abortion in a country where the practice is only permitted in cases of rape, incest, or danger to the life or health of the mother.

However, Tusk will have to contend with Poland’s far right, which still retains meaningful political power despite losing the premiership. 

  • Taylor Swift, shining so brightly

In a world where celebrity can be fleeting, Taylor Swift has never been far from the limelight. From Nashville to New York, the 34-year-old American singer has built a romantic-pop musical empire that has captivated millions of fans, known as “Swifties”, worldwide.

Named the Person of the Year 2023 by Time magazine on December 6, Swift, who started her career more than 15 years ago, boasts a long list of world records. Her albums frequently top the charts in the United States – since she debuted in 2006, 13 of her 14 albums have reached number one in US sales.

In October, Swift released the concert film “The Eras Tour”, which went on to become the highest-grossing concert film of all time, earning $249.9 million worldwide.

In September, the singer demonstrated her cultural force. After a short message on Instagram encouraging her 272 million followers to register to vote, the website she directed them to – the nonprofit Vote.org – recorded more than 35,000 registrations in just one day.

Committed to maintaining musical independence, the feminist icon began re-recording the tracks from her first six albums to regain control of the rights after her former record label was acquired in 2019 by music industry magnate Scooter Braun.

  • Hollywood’s striking writers and actors, fighting and winning

In May 2023, Hollywood ignited. The industry’s writers, followed by actors in July, went on strike. The stakes in the negotiations included both base and residual pay – which actors say has been undercut by inflation and the business model of streaming – and the threat of unregulated use of artificial intelligence (AI) by studios.

The strike – the most significant since 1960 – paralysed film and series production for several months, costing the US economy at least $6 billion.

At the heart of the protest were fears that studios would use AI to generate scripts or clone the voices and images of actors without compensation. The strikers, supported by the public, refused to back down.

They chanted “When we fight, we win”, a slogan that has become the rallying cry for workers across the United States, from the automotive industry to hospitality. Prominent names in cinema joined the picket lines, including actress and producer Jessica Chastain and “Breaking Bad” star Bryan Cranston.

In September, the writers reached a salary agreement with the studios which included protections relating to the use of AI. Actors finally returned to sets in November after 118 days off the job.

  • Elon Musk, genius or man-child?

Elon Musk will leave 2023 an even more divisive figure than when he entered it. With a fortune of $250 billion, Musk has grand ambitions to conquer space, roads, and social networks.

Twitter, renamed X in late July after Musk bought the company in October 2022, has had a chaotic year: mass layoffs, a showdown with the EU over misinformation, controversy over certified accounts, and plummeting advertising revenues. Its survival is now an open question after Musk told advertisers who suspended their advertising over his repost of a tweet widely deemed anti-Semitic to “Go f—k yourself”.

Beyond X, Musk’s company SpaceX has been instrumental in the war in Ukraine with its satellite internet product Starlink. It has also made progress on the Starship Rocket, which could revolutionise space transportation. However, the two launches this year didn’t go as planned, raising concerns about the project’s feasibility.

At Tesla, his electric car company, an international strike movement that is still gaining momentum has already tarnished his image.

Finally, his Neuralink project, which aims to develop brain implants to assist paralysed individuals or those with neurological diseases, has also faced criticism. Some experts believe the risks this project poses are too high.

Whether you love him or hate him, it seems Musk can’t stay out of the headlines. 

  • Jennifer Hermoso, the face of change for Spanish football

Until this summer, Jennifer Hermoso was known only to football enthusiasts. But the wave of support she received after the Women’s World Cup has made her a symbol.

As the Spanish player was being crowned world champion in Sydney, she was unexpectedly kissed on the mouth by Luis Rubiales, then president of the Spanish Football Federation. The image, broadcast live on television, circled the globe and sparked outrage.

A few days later, Hermoso broke her silence and denounced an “impulse-driven, sexist, out of place act”. She filed a complaint against Rubiales, who claimed it was just a consensual “little kiss”.


[Post by Jennifer Hermoso on X (@Jennihermoso), August 25, 2023: “Official Announcement.”]

Ultimately forced to resign, Rubiales was charged with sexual assault by the courts and suspended for three years from any football-related activity by FIFA. The scandal led to a boycott by Spanish players of the national team for several days until the federation promised “immediate and profound changes”.

  • Mortaza Behboudi, Afghan journalist fighting for press freedom

Most of 2023 unfolded behind bars for Franco-Afghan journalist Mortaza Behboudi. His crime? Simply doing his job. 

It all started on January 7 when he was arrested on charges of espionage in Kabul by the Taliban. During his 9 months in prison, he was regularly tortured and threatened with death.

Reporters Without Borders (RSF) and its support committee, created by his wife Aleksandra Mostovaja, moved heaven and earth to secure his release. Their determination eventually paid off, and he was released on October 18.

Working for French news outlets including France Télévisions, TV5Monde, Libération, and Mediapart, he already wants to return to Afghanistan. “My fight is to give a voice to those who don’t have it,” he told FRANCE 24.

According to the annual round-up compiled by RSF, 45 journalists were killed worldwide in connection with their work (as of 1 December 2023). 

  • Rayyanah Barnawi, first Saudi woman in space

On May 21, Rayyanah Barnawi became the first Saudi woman to travel to the International Space Station. A biomedical science graduate, she dedicated her ten-day mission to the field of cancer stem cell research.

Her journey is an important symbol for Saudi Arabia, where women face restrictions. Barnawi is emblematic of a new generation of highly educated and ambitious Saudi women ready to take on important roles in the historically conservative society.

The journey is also part of the Saudi monarchy’s strategy to renew its international image.

  • Sam Altman, the father of ChatGPT

At 38, Sam Altman is one of the most prominent names in the tech world. He is the CEO of OpenAI, the San Francisco-based AI lab that created ChatGPT – a chatbot with 100 million weekly users now disrupting the technology ecosystem.

On top of being a prolific entrepreneur, Altman officially launched Worldcoin, a new cryptocurrency with an identity verification system using the human iris. Like Elon Musk, with whom he co-founded OpenAI in 2015, his grand ambition and sometimes controversial methods have earned him criticism. Some accuse him of prioritising security over innovation.

In November 2023, he was dismissed by the board of directors of OpenAI, only to be reinstated in his position after most of the company’s employees threatened to leave the group.

His activity is not restricted to entrepreneurship. In May, Altman invested $375 million in Helion, a nuclear fusion startup.

  • Barbie, a triumphant return

For better or worse, Barbie has been an icon since she first hit store shelves in 1959. The 29-centimetre doll has had an impact on generations of girls and women: long reviled by feminists, she had an image makeover in 2023.

This summer, Barbie experienced a triumphant return thanks to a film directed by Greta Gerwig starring Margot Robbie and Ryan Gosling. Released in July, the film is a critical and commercial success praised for its intelligent script, impeccable performances, and feminist message.

Gerwig created a world where Barbie is a rebellious icon fighting against gender stereotypes, surrounded by strong and independent female characters.

In the process, Gerwig became the first woman to direct a film grossing more than a billion dollars at the box office. The 40-year-old capped off her stellar year by being named jury president at Cannes 2024. 

This article is translated from the original in French. 

‘Counterfeit people’: The dangers posed by Meta’s AI celebrity lookalike chatbots

Meta announced on Wednesday the arrival of chatbots with personalities similar to certain celebrities, with whom it will be possible to chat. Presented as an entertaining evolution of ChatGPT and other forms of AI, this latest technological development could prove dangerous.

Meta (formerly known as Facebook) sees these as “fun” artificial intelligence. Others, however, feel that this latest technological development could mark the first step towards creating “the most dangerous artefacts in human history”, to quote from American philosopher Daniel C. Dennett’s essay about “counterfeit people”.

On Wednesday, September 27, the social networking giant announced the launch of 28 chatbots (conversational agents), which supposedly have their own personalities and have been specially designed for younger users. These include Victor, a so-called triathlete who can motivate “you to be your best self”, and Sally, the “free-spirited friend who’ll tell you when to take a deep breath”.

Internet users can also chat to Max, a “seasoned sous chef” who will give you “culinary tips and tricks”, or engage in a verbal joust with Luiz, who “can back up his trash talk”. 

A chatbot that looks like Paris Hilton

To reinforce the idea that these chatbots have personalities and are not simply an amalgam of algorithms, Meta has given each of them a face. Thanks to partnerships with celebrities, these robots look like American jet-setter and DJ Paris Hilton, TikTok star Charli D’Amelio and American-Japanese tennis player Naomi Osaka.

And that’s not all. Meta has opened Facebook and Instagram accounts for each of its conversational agents to give them an existence outside chat interfaces and is working on giving them a voice by next year. Mark Zuckerberg’s company has also been looking for screenwriters who can “write character, and other supporting narrative content that appeal to wide audiences”.

Meta may present these 28 chatbots as an innocent undertaking to entertain young internet users, but all these efforts point towards an ambitious project to build AIs that resemble humans as much as possible, writes Rolling Stone.

This race to “counterfeit people” worries many observers, who are already concerned about recent developments made in large language model (LLM) research such as ChatGPT and Llama 2, its Facebook counterpart. Without going as far as Dennett, who is calling for people like Zuckerberg to be locked up, “there are a number of thinkers who are denouncing these major groups’ deliberately deceptive approach”, said Ibo van de Poel, professor of ethics and technology at the Delft University of Technology in the Netherlands.

AIs with personalities are ‘literally impossible’

The idea of conversational agents “with a personality is literally impossible”, said van de Poel. Algorithms are incapable of demonstrating “intention in their actions or ‘free will’, two characteristics that are considered to be intimately linked to the idea of a personality”.

Meta and others can, at best, imitate certain traits that make up a personality. “It must be technologically possible, for example, to teach a chatbot to act like the person they represent,” said van de Poel. For instance, Meta’s AI Amber, which is supposed to resemble Hilton, may be able to speak the same way as its human alter ego. 

The next step will be to train these LLMs to express the same opinions as the person they resemble. This is a much more complicated behaviour to programme, as it involves creating a sort of accurate mental picture of all of a person’s opinions. There is also a risk that chatbots with personalities could go awry. One of the conversational agents that Meta tested expressed “misogynistic” opinions, according to the Wall Street Journal, which was able to consult internal company documents. Another committed the “mortal sin” of criticising Zuckerberg and praising TikTok.

To build these chatbots, Meta explains that it set out to give them “unique personal stories”. In other words, these AIs’ creators have written biographies for them in the hopes that they will be able to develop a personality based on what they have read about themselves. “It’s an interesting approach, but it would have been beneficial to add psychologists to these teams to get a better understanding of personality traits”, said Anna Strasser, a German philosopher who was involved in a project to create a large language model capable of philosophising.
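
Meta has not explained how these biographies are wired into its models. With today's chat-style LLM APIs, though, a persona is commonly injected as a system message containing the character's backstory; the sketch below illustrates that generic pattern (using the OpenAI client purely as a stand-in, with an invented biography for the "Victor" character mentioned above), not Meta's actual system.

```python
# Illustrative persona prompting: a written biography supplied as a system message.
# The biography and model name are invented; this is not Meta's implementation.
from openai import OpenAI

client = OpenAI()  # assumes OPENAI_API_KEY is set in the environment

persona_bio = (
    "You are 'Victor', an upbeat triathlete and motivational coach. "
    "You train every morning, you speak in short, energetic sentences, "
    "and you never claim to be a real person."
)

reply = client.chat.completions.create(
    model="gpt-4o-mini",  # placeholder model name
    messages=[
        {"role": "system", "content": persona_bio},
        {"role": "user", "content": "I keep skipping my morning runs. Any advice?"},
    ],
).choices[0].message.content

print(reply)
```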

Meta’s latest AI project is clearly driven by a thirst for profit. “People will no doubt be prepared to pay to be able to talk and have a direct relationship with Paris Hilton or another celebrity,” said Strasser.

The more users feel like they are speaking with a human being, “the more comfortable they’ll feel, the longer they’ll stay and the more likely they’ll come back”, said van de Poel. And in the world of social media, time – spent on Facebook and its ads – is money.

Tool, living thing or somewhere between?

It is certainly not surprising that Meta’s first foray into AI with “personality” is a set of chatbots aimed primarily at teenagers. “We know that young people are more likely to be anthropomorphic,” said Strasser.

However, the experts interviewed feel that Meta is playing a dangerous game by stressing the “human characteristics” of their AIs. “I really would have preferred if this group had put more effort into explaining the limits of these conversational agents, rather than trying to make them seem more human”, said van de Poel.

The emergence of these powerful LLMs has upset “the dichotomy between what is a tool or object and what is a living thing. These ChatGPTs are a third type of agent that stands somewhere between the two extremes”, said Strasser. Human beings are still learning how to interact with these strange new entities, so by making people believe that a conversational agent can have a personality, Meta is suggesting that it be treated more like another human being than a tool.

“Internet users tend to trust what these AIs say”, which makes them dangerous, said van de Poel. This is not just a theoretical risk: a man in Belgium took his own life in March 2023 after discussing the consequences of global warming with a conversational agent for six weeks.

Above all, if the boundary between the world of AIs and humans is eventually blurred completely, “this could potentially destroy trust in everything we find online because we won’t know who wrote what”, said Strasser. This would, as Dennett warned in his essay, open the door to “destroying our civilisation. Democracy depends on the informed (not misinformed) consent of the governed [which cannot be obtained if we no longer know what and whom to trust]”.

It remains to be seen if chatting with an AI lookalike of Hilton means that we are on the path to destroying the world as we know it. 

This article has been translated from the original in French.

How should we regulate generative AI, and what will happen if we fail?

By Rohit Kapoor, Vice Chairman and CEO, EXL

Regulation must encourage collaboration and research between all the major players, from experts in the field to policy-makers and ethicists, Rohit Kapoor writes.

Generative AI is experiencing rapid growth and expansion. 

There’s no question as to whether this technology will change the world — all that remains to be seen is how long it will take for the transformative impact to be realised and how exactly it will manifest in each industry and niche. 

Whether it’s fully automated and targeted consumer marketing, medical reports generated and summarised for doctors, or chatbots with distinct personality types being tested by Instagram, generative AI is driving a revolution in just about every sector.

The potential benefits of these advancements are monumental. Quantifying the hype, a recent report by Bloomberg Intelligence predicted an explosion in generative AI market growth, from $40 billion (€36.5bn) in 2022 to $1.3 trillion (€1.18tn) in the next ten years.
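
For context, those two figures imply a compound annual growth rate of roughly 42 per cent, as a quick back-of-the-envelope calculation shows:

```python
# Implied compound annual growth rate from the Bloomberg Intelligence forecast.
start, end, years = 40e9, 1.3e12, 10
cagr = (end / start) ** (1 / years) - 1
print(f"Implied CAGR: {cagr:.1%}")  # prints roughly 42% per year
```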

But in all the excitement to come, it’s absolutely critical that policy-makers and corporations alike do not lose sight of the risks of this technology.

These large language models, or LLMs, present dangers which not only threaten the very usefulness of the information they produce but could also prove harmful in entirely unintentional ways — from bias to blurring the lines between real and artificial to loss of control.

Who’s responsible?

The responsibility for taking the reins on regulation falls naturally with governments and regulatory bodies, but it should also extend beyond them. The business community must self-govern and contribute to principles that can become regulations while policy-makers deliberate.

Two core principles should be followed as soon as possible by those developing and running generative AI, in order to foster responsible use and mitigate negative impacts. 

First, large language models should only be applied to closed data sets to ensure safety and confidentiality. 

Second, all development and adoption of use cases leveraging generative AI should have the mandatory oversight of professionals to ensure “humans in the loop”.

These principles are essential for maintaining accountability, transparency, and fairness in the use of generative AI technologies.

From there, three main areas will need attention from a regulatory perspective.

Maintaining our grip on what’s real

The capabilities of generative AI to mimic reality are already quite astounding, and it’s improving all the time. 

So far this year, the internet has been awash with startling images like the Pope in a puffer jacket or the Mona Lisa as she would look in real life. 

And chatbots are being deployed in unexpected realms like dating apps — where the introduction of the technology is reportedly intended to reduce “small talk”.

The wider public should feel no guilt in enjoying these creative outputs, but industry players and policy-makers must be alive to the dangers of this mimicry. 

Amongst them are identity theft and reputational damage. 

Distinguishing between AI-generated content and content genuinely created by humans is a significant challenge, and regulation should consider the consequences and surveillance aspects of it.

Clear guidelines are needed to determine the responsibility of platforms and content creators to label AI-generated content. 

Robust verification systems like watermarking or digital signatures would support this authentication process.
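
As a deliberately simplified illustration of what such a verification system could look like, the sketch below signs a piece of content with a secret key and later checks whether it has been altered; real provenance schemes typically rely on asymmetric signatures or watermarks embedded in the media itself, and every detail here is an assumption made for the example.

```python
# Simplified content-provenance check using an HMAC signature (illustrative only).
# Real systems would typically use asymmetric signatures or embedded watermarks.
import hashlib
import hmac

SECRET_KEY = b"publisher-signing-key"  # placeholder; would be kept private


def sign(content: bytes) -> str:
    """Produce a hex signature for the given content."""
    return hmac.new(SECRET_KEY, content, hashlib.sha256).hexdigest()


def verify(content: bytes, signature: str) -> bool:
    """Check that the content still matches its signature."""
    return hmac.compare_digest(sign(content), signature)


caption = b"This video was generated with tool X on 2024-02-16."
tag = sign(caption)

print(verify(caption, tag))                # True: content is intact
print(verify(caption + b" edited", tag))   # False: content was altered
```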

Tackling imperfections that lead to bias

Policy-makers must set about regulating the monitoring and validation of imperfections in the data, algorithms and processes used in generative AI. 

Bias is a major factor. Training data can be biased or inadequate, resulting in a bias in the AI itself. 

For example, this might cause a company chatbot to deprioritise customer complaints that come from customers of a certain demographic or a search engine to throw up biased answers to queries. And biases in algorithms can perpetuate those unfair outcomes and discrimination.

Regulations need to force the issue of transparency and push for clear documentation of processes. This would help ensure that processes can be explained and that accountability is upheld. 

At the same time, it would enable scrutiny of generative AI systems, including safeguarding of intellectual property (IP) and data privacy — which, in a world where data is the new currency, is crucially important.

On top of this, regulating the documentation involved would help prevent “hallucinations” by AI — which are essentially where an AI gives a response that is not justified by the data used to train it.

Preventing the tech from becoming autonomous and uncontrollable

An area for special caution is the potential for an iterative process of AI creating subsequent generations of AI, eventually leading to AI that is misdirected or compounding errors. 

The progression from first-generation to second- and third-generation AI is expected to occur rapidly. 

The fundamental requirement of the self-declaration of AI models, where each model openly acknowledges its AI nature, is of utmost importance. 

However, enabling and regulating this self-declaration poses a significant practical challenge. One approach could involve mandating hardware and software companies to implement hardcoded restrictions, allowing only a certain threshold of AI functionality. 

Advanced functionality above such a threshold could be subject to an inspection of systems, audits, testing for compliance with safety standards, restrictions on degrees of deployment and levels of security, etc. Regulators should define and enforce these restrictions to mitigate risks.

We should be acting quickly and together

The world-changing potential of generative AI demands a coordinated response. 

If each country and jurisdiction develops its own rules, the adoption of the technology — which has the potential for enormous good in business, medicine, science and more — could be crippled. 

Regulation must encourage collaboration and research between all the major players, from experts in the field to policy-makers and ethicists. 

With a coordinated approach, the risks can be sensibly mitigated, and the full benefits of generative AI realised, unlocking its huge potential.

Rohit Kapoor is the Vice Chairman and CEO of EXL, a data analytics and digital operations and solutions company.

At Euronews, we believe all views matter. Contact us at [email protected] to send pitches or submissions and be part of the conversation.

Explained | The Hiroshima process that takes AI governance global

The annual Group of Seven (G7) Summit, hosted by Japan, took place in Hiroshima on May 19-21, 2023. Among other matters, the G7 Hiroshima Leaders’ Communiqué initiated the Hiroshima AI Process (HAP) – an effort by this bloc to determine a way forward to regulate artificial intelligence (AI).

The ministerial declaration of the G7 Digital and Tech Ministers’ Meeting, on April 30, 2023, discussed “responsible AI” and global AI governance, and said, “we reaffirm our commitment to promote human-centric and trustworthy AI based on the OECD AI Principles and to foster collaboration to maximise the benefits for all brought by AI technologies”.

Even as the G7 countries are using such fora to deliberate AI regulation, they are acting on their own instead of waiting for the outcomes from the HAP. So while there is an accord to regulate AI, the discord – as evident in countries preferring to go their own paths – will also continue.

What is the Hiroshima AI process?

The communiqué accorded more importance to AI than the technology has ever received in such a forum – even as G7 leaders were engaged with other issues like the war in Ukraine, economic security, supply chain disruptions, and nuclear disarmament. It said that the G7 is determined to work with others to “advance international discussions on inclusive AI governance and interoperability to achieve our common vision and goal of trustworthy AI, in line with our shared democratic value”.

To quote further at length:

“We recognise the need to immediately take stock of the opportunities and challenges of generative AI, which is increasingly prominent across countries and sectors, and encourage international organisations such as the OECD to consider analysis on the impact of policy developments and Global Partnership on AI (GPAI) to conduct practical projects. In this respect, we task relevant ministers to establish the Hiroshima AI process, through a G7 working group, in an inclusive manner and in cooperation with the OECD and GPAI, for discussions on generative AI by the end of this year.

These discussions could include topics such as governance, safeguard of intellectual property rights including copyrights, promotion of transparency, response to foreign information manipulation, including disinformation, and responsible utilisation of these technologies.”

The HAP is likely to conclude by December 2023. The first meeting under this process was held on May 30. Per the communiqué, the process will be organised through a G7 working group, although the exact details are not clear.

Why is the process notable?

While the communiqué doesn’t indicate the expected outcomes from the HAP, there is enough in there to indicate what values and norms will guide it and where it will derive the guiding principles on which AI governance will be based.

The communiqué as well as the ministerial declaration also say more than once that AI development and implementation must be aligned with values such as freedom, democracy, and human rights. Values need to be linked to principles that drive regulation. To this end, the communiqué also stresses fairness, accountability, transparency, and safety.

The communiqué also spoke of “the importance of procedures that advance transparency, openness, and fair processes” for developing responsible AI. “Openness” and “fair processes” can be interpreted in different ways, and the exact meaning of the “procedures that advance them” is not clear.

What does the process entail?

An emphasis on freedom, democracy, and human rights, and mentions of “multi-stakeholder international organisations” and “multi-stakeholder processes” indicate that the HAP isn’t expected to address AI regulation from a State-centric perspective. Instead, it exists to account for the importance of involving multiple stakeholders in various processes and to ensure the latter are fair and transparent.

The task before the HAP is really challenging considering the divergence among G7 countries in, among other things, regulating risks arising out of applying AI. It can help these countries develop a common understanding on some key regulatory issues while ensuring that any disagreement doesn’t result in complete discord.

For now, there are three ways in which the HAP can play out:

1. It enables the G7 countries to move towards a divergent regulation based on shared norms, principles and guiding values;

2. It becomes overwhelmed by divergent views among the G7 countries and fails to deliver any meaningful solution; or

3. It delivers a mixed outcome with some convergence on finding solutions to some issues but is unable to find common ground on many others.

Is there an example of how the process can help?

The matter of intellectual property rights (IPR) offers an example of how the HAP can help. Here, the question is whether training a generative AI model, like ChatGPT, on copyrighted material constitutes a copyright violation. While IPR in the context of AI finds mention in the communiqué, the relationship between AI and IPR in different jurisdictions is not clear. There have been several conflicting interpretations and judicial pronouncements.

The HAP can help the G7 countries move towards a consensus on this issue by specifying guiding rules and principles related to AI and IPR. For example, the process can bring greater clarity to the role and scope of the ‘fair use’ doctrine in the use of AI for various purposes.

Generally, the ‘fair use’ exception is invoked to allow activities like teaching, research, and criticism to continue without seeking the copyright-owner’s permission to use their material. Whether use of copyrighted materials in datasets for machine learning is fair use is a controversial issue.

As an example, the HAP can develop a common guideline for G7 countries that permits the use of copyrighted materials in datasets for machine-learning as ‘fair use’, subject to some conditions. It can also differentiate use for machine-learning per se from other AI-related uses of copyrighted materials.

This in turn could affect the global discourse and practice on this issue.

The stage has been set…

The G7 communiqué states that “the common vision and goal of trustworthy AI may vary across G7 members.” The ministerial declaration has a similar view: “We stress the importance of international discussions on AI governance and interoperability between AI governance frameworks, while we recognise that like-minded approaches and policy instruments to achieve the common vision and goal of trustworthy AI may vary across G7 members.” This acknowledgment, taken together with other aspects of the HAP, indicates that the G7 doesn’t expect to harmonise their policies on regulations.

On the other hand, the emphasis on working with others, including OECD countries and on developing an interoperable AI governance framework, suggests that while the HAP is a process established by the G7, it still has to respond to the concerns of other country-groups as well as the people and bodies involved in developing international technical standards in AI.

It’s also possible that countries that aren’t part of the G7 but want to influence the global governance of AI may launch a process of their own like the HAP.

Overall, the establishment of the HAP makes one thing clear: AI governance has become a truly global issue that is likely to only become more contested in future.

Krishna Ravi Srinivas is with RIS, New Delhi. Views expressed are personal.

What we lose when we work with a ‘giant AI’ like ChatGPT

Recently, ChatGPT and its ilk of ‘giant artificial intelligences’ (Bard, Chinchilla, PaLM, LaMDA, et al.), or gAIs, have been making several headlines.

ChatGPT is a large language model (LLM). This is a type of (transformer-based) neural network that is great at predicting the next word in a sequence of words. ChatGPT uses GPT-4 – a model trained on a large amount of text on the internet, which its maker OpenAI could scrape and could justify as being safe and clean to train on. GPT-4 has one trillion parameters now being applied in the service of, per the OpenAI website, ensuring the creation of “artificial general intelligence that serves all of humanity”.
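
To make "predicting the next word" concrete, the sketch below uses the small, openly available GPT-2 model via the Hugging Face transformers library to list the most likely next tokens for a prompt. GPT-2 is, of course, far smaller than GPT-4; the example only illustrates the mechanism.

```python
# Next-token prediction with a small open model (GPT-2), for illustration.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("gpt2")
model = AutoModelForCausalLM.from_pretrained("gpt2")

prompt = "The capital of France is"
inputs = tokenizer(prompt, return_tensors="pt")

with torch.no_grad():
    logits = model(**inputs).logits  # shape: (batch, sequence_length, vocab_size)

# Probability distribution over the vocabulary for the *next* token.
next_token_probs = torch.softmax(logits[0, -1], dim=-1)
top = torch.topk(next_token_probs, k=5)

for prob, token_id in zip(top.values, top.indices):
    print(f"{tokenizer.decode(int(token_id))!r}: {prob:.3f}")
```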

Yet gAIs leave no room for democratic input: they are designed from the top-down, with the premise that the model will acquire the smaller details on its own. There are many use-cases intended for these systems, including legal services, teaching students, generating policy suggestions and even providing scientific insights. gAIs are thus intended to be a tool that automates what has so far been assumed to be impossible to automate: knowledge-work.

What is ‘high modernism’?

In his 1998 book Seeing Like a State, Yale University professor James C. Scott delves into the dynamics of nation-state power, both democratic and non-democratic, and its consequences for society. States seek to improve the lives of their citizens, but when they design policies from the top-down, they often reduce the richness and complexity of human experience to that which is quantifiable.

The current driving philosophy of states is, according to Prof. Scott, “high modernism” – a faith in order and measurable progress. He argues that this ideology, which falsely claims to have scientific foundations, often ignores local knowledge and lived experience, leading to disastrous consequences. He cites the example of monocrop plantations, in contrast to multi-crop plantations, to show how top-down planning can fail to account for regional diversity in agriculture.

The consequence of that failure is the destruction of soil and livelihoods in the long-term. This is the same risk now facing knowledge-work in the face of gAIs.

Why is high modernism a problem when designing AI? Wouldn’t it be great to have a one-stop shop, an Amazon for our intellectual needs? As it happens, Amazon offers a clear example of the problems resulting from a lack of diverse options. Such a business model yields only increased standardisation and not sustainability or craft, and consequently everyone has the same cheap, cookie-cutter products, while the local small-town shops die a slow death by a thousand clicks.

What do giant AIs abstract away?

Like the death of local stores, the rise of gAIs could lead to the loss of languages, which will hurt the diversity of our very thoughts. The risk of such language loss is due to the bias induced by models trained only on the languages that already populate the Internet, which is a lot of English (~60%). There are other ways in which a model is likely to be biased, including on religion (more websites preach Christianity than they do other religions, e.g.), sex and race.

At the same time, LLMs are unreasonably effective at providing intelligible responses. Science-fiction author Ted Chiang suggests that this is true because ChatGPT is a “blurry JPEG” of the internet, but a more apt analogy might be that of an atlas.

An atlas is a great way of seeing the whole world in snapshots. However, an atlas lacks multi-dimensionality. For example, I asked ChatGPT why it is a bad idea to plant eucalyptus trees in the West Medinipur district. It gave me several reasons why monoculture plantations are bad – but failed to supply the real reason people in the area opposed it: a monoculture plantation reduced the food they could gather.

That kind of local knowledge only comes from experience. We can call that ‘knowledge of the territory’. This knowledge is abstracted away by gAIs in favour of the atlas view of all that is present on the internet. The territory can only be captured by the people doing the tasks that gAIs are trying to replace.

Can diversity help?

A part of the failure to capture the territory is demonstrated in gAIs’ lack of understanding. If you are careful about what you ask them for (a feat called “prompt engineering” – an example of a technology warping the ecology of our behaviour), they can fashion impressive answers. But ask it the same question in a slightly different way and you can get complete rubbish. This trend has prompted computer scientists to call these systems stochastic parrots – that is, systems that can mimic language but are random in their behaviour.

Positive research directions exist as well. For example, BLOOM is an open-source LLM developed by scientists with public money and with extensive filtering of the training data. This model is also multilingual, including 10 Indian languages, plus an active ethics team that regularly updates the licence for use. 

There are multiple ways to thwart the risks posed by gAIs. One is to artificially slow the rate of progress in AI commercialisation to allow time for democratic inputs. (Tens of thousands of researchers have already signed a petition to this effect).

Another is to ensure there are diverse models being developed. ‘Diversity’ here implies multiple solutions to the same question, like independent cartographers preparing different atlases with different incentives: some will focus on the flora while others on the fauna. The research on diversity suggests that the more time passes before reaching a common solution, the better the outcome. And a better outcome is critical when dealing with the stakes involved in artificial general intelligence – an area of study in which a third of researchers believe it can lead to a nuclear-level catastrophe.

How can simply ‘assisting and augmenting’ be harmful?

Just to be clear, I wrote this article, not ChatGPT. But I wanted to check what it would say…

“Q: Write a response to the preceding text as ChatGPT.

A: As ChatGPT, I’m a tool meant to assist and augment human capabilities, not replace them; my goal is to understand and respond to your prompts, not to replace the richness and diversity of human knowledge and experience.”

Yet as the writer George Zarkadakis put it, “Every augmentation is also an amputation”. ChatGPT & co. may “assist and augment” but at the same time, they reduce the diversity of thoughts, solutions, and knowledge, and they currently do so without the inputs of the people meant to use them.

Are programmes like ChatGPT bringing useful change or unknown chaos?


The opinions expressed in this article are those of the author and do not represent in any way the editorial position of Euronews.

Since ChatGPT exploded onto the scene in November 2022, many have contemplated how it might transform life, jobs and education as we know it for better or worse. 

Many, including us, are excited by the benefits that digital technology can bring to consumers. 

However, our experience in testing digital products and services, shaping digital policy, and diving into consumers’ perspectives on IoT, AI, data and platforms, means the eyes of all experts are wide open to the challenges of disruptive digitalisation as well.

After all, consumers should be able to use the best technology in the way they want to and not have to compromise on safety and trust.

What’s in it for consumers?

There’s plenty of positive potential in new, generative technologies like ChatGPT, including producing written content, creating training materials for medical students or writing and debugging code.

We’ve already seen people innovate consumer tasks with ChatGPT — for example, using it to write a successful parking fine appeal. 

And when asked, ChatGPT had its own ideas of what it could do for consumers.

“I can help compare prices and specifications of different products, answer questions about product maintenance and warranties, and provide information on return and exchange policies…. I can also help consumers understand technical terms and product specifications, making it easier for them to make informed decisions,” it told us when we asked the question.

Looking at this, it might make you wonder if this level of service from a machine might lead to experts in all fields, including ours, becoming obsolete.

However, the rollout of ChatGPT and similar technologies has shown it still has a problem with accuracy, which is, in turn, a problem for its users.

The search for truth

Let’s start by looking at the challenge of accuracy and truth in a large language model like ChatGPT.

ChatGPT has started to disrupt internet search through a rollout of the technology in Microsoft’s Bing search engine. 

With ChatGPT-enabled search, results appear not as a list of links but as a neat summary of the information within the links, presented in a conversational style. 

The answers can be finessed through more questions, just as if you were chatting to a friend or advisor.

This could be really helpful for a request like “can you show me the most lightweight tent that would fit into a 20-litre bike pannier”. 

Results like these would be easy to verify, and perhaps more crucially, if they turn out to be wrong, they would not pose a major risk to a person.

However, it’s a different story when the information that is “wrong” or “inaccurate” carries a material risk of harm — for example, health or financial advice or deliberate misinformation that could cause wider social problems.

It’s convincing, but is it reliable?

The problem is that technologies like ChatGPT are very good at writing convincing answers. 

But OpenAI have been clear that ChatGPT has not been designed to write text that is true. 

It is trained to predict the next word and create answers that sound highly plausible — which means that a misleading or untrue answer could look just as convincing as a reliable, true one.

The speedy delivery of convincing, plausible untruths through tools like ChatGPT becomes a critical problem in the hands of users whose sole purpose is to mislead, deceive and defraud.

Large language models like ChatGPT can be trained to learn different tones and styles, which makes them ripe for exploitation. 

Convincing phishing emails suddenly become much easier to compose, and persuasive but misleading visuals are quicker to create. 

Scams and frauds could become ever more sophisticated, and disinformation ever harder to distinguish from the truth. Both could become immune to the defences we have built up.

We need to learn how to get the best out of ChatGPT

Even in focusing on just one aspect of ChatGPT, those of us involved in protecting consumers in Europe and worldwide have examined the multiple layers of consequences that this advanced technology could create once it reaches users’ hands.

People in our field are indeed already working together with businesses, digital rights groups and research centres to start to unpick the complexities of such a disruptive technology.

OpenAI have put safeguards around the use of the technology, but other rollouts of similar products may not have them.

Strong, future-focused governance and rules are needed to make sure that consumers can make the most of the technology with confidence. 

As the AI Act develops, Euroconsumers’ organisations are working closely with BEUC to secure consumer rights to privacy, safety and fairness in the legislative frameworks. 

In the future, we will be ready to defend consumers in court over wrongdoing caused by AI systems.

True innovation still has human interests at its core

However, there are plenty of reasons to look at the tools of the future, like ChatGPT, with optimism. 

We believe that innovation can be a lever of social and economic development by shaping markets that work better for consumers. 

However, true innovation needs everyone’s input and only happens when tangible benefits are felt in the lives of as many people as possible.

But we are only at the beginning of what is turning out to be an intriguing experience with these interactive, generative technologies.

It may be too early for a definitive last word, but one thing is absolutely sure: despite ChatGPT, and perhaps even because of it, there will still be plenty of need for consumer protection by actual humans.

Marco Pierani is the Director of Public Affairs and Media Relations, and Els Bruggeman serves as Head of Advocacy and Enforcement at Euroconsumers, a group of five consumer organisations in Belgium, Italy, Brazil, Spain and Portugal.

At Euronews, we believe all views matter. Contact us at [email protected] to send pitches or submissions and be part of the conversation.


ChatGPT frenzy sweeps China as firms scramble for home-grown options

Microsoft-backed OpenAI has kept its hit ChatGPT app off-limits to users in China, but the app is attracting huge interest in the country, with firms rushing to integrate the technology into their products and launch rival solutions.

While residents in the country are unable to create OpenAI accounts to access the artificial intelligence-powered (AI) chatbot, virtual private networks and foreign phone numbers are helping some bypass those restrictions.

At the same time, the OpenAI models behind the ChatGPT programme, which can write essays, recipes and complex computer code, are relatively accessible in China and increasingly being incorporated into Chinese consumer technology applications from social networks to online shopping.

The tool’s surging popularity is rapidly raising awareness in China about how advanced U.S. AI is and, according to analysts, just how far behind tech firms in the world’s second-largest economy are as they scramble to catch up.



“There is huge excitement around ChatGPT. Unlike the metaverse which faces huge difficulty in finding real-life application, ChatGPT has suddenly helped us achieve human-computer interaction,” said Ding Daoshi, director of Beijing-based internet consultancy Sootoo. “The changes it will bring about are more immediate, more direct and way quicker.”

Neither OpenAI nor ChatGPT itself is blocked by Chinese authorities, but OpenAI does not allow users in mainland China, Hong Kong, Iran, Russia and parts of Africa to sign up.

OpenAI told Reuters it is working to make its services more widely available.

“While we would like to make our technology available everywhere, conditions in certain countries make it difficult or impossible for us to do so in a way that is consistent with our mission,” the San Francisco-based firm said in an emailed statement. “We are currently working to increase the number of locations where we can provide safe and beneficial access to our tools.”

In December, Tencent Holdings’ WeChat, China’s biggest messaging app, shut several ChatGPT-related programmes that had appeared on the network, according to local media reports, but they have continued to spring up.

Dozens of bots rigged up to ChatGPT technology have emerged on WeChat, with hobbyists using the technology to build programmes or automated accounts that can interact with users. At least one account charges users a fee of ¥9.99 ($1.47) to ask 20 questions.

Tencent did not respond to Reuters’ request for comment.

ChatGPT supports Chinese and is highly capable of conversing in the language, which has helped drive its unofficial adoption in the country.

Chinese firms also use proxy tools or existing partnerships with Microsoft, which is investing billions of dollars in OpenAI, to access tools that allow them to embed AI technology into their products.

Shenzhen-based Proximai in December introduced a virtual character into its 3D game-like social app that used ChatGPT’s underlying tech to converse. Beijing-based entertainment software company Kunlun Tech plans to incorporate ChatGPT into its web browser Opera.



SleekFlow, a Tiger Global-backed startup in Hong Kong, said it was integrating the AI into its customer relations messaging tools. “We have clients all over the world,” Henson Tsai, SleekFlow’s founder, said. “Among other things, ChatGPT does excellent translations, sometimes better than other solutions available on the market.”

Censorship

Reuters’ tests of ChatGPT indicate that the chatbot does not shy away from questions that would be sensitive in mainland China. Asked for its thoughts on Chinese President Xi Jinping, for instance, it responded that it does not have personal opinions and presented a range of views.

But some of its proxy bots on WeChat have blacklisted such terms, according to other Reuters checks, complying with China’s heavy censorship of its cyberspace. When asked the same question about Xi on one ChatGPT proxy bot, it responded by saying that the conversation violated rules.

Proximai’s founder Will Duan said his platform would filter information presented to users during their interactions with ChatGPT in order to comply with Chinese rules.
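Duan did not describe the mechanism, but a crude blocklist filter of the kind such a proxy app might place between ChatGPT and its users can be sketched as below; the blocked terms and the refusal message are purely hypothetical placeholders, not Proximai’s actual implementation.

```python
# Hypothetical sketch only: a crude keyword blocklist of the kind a ChatGPT proxy app
# might apply to model output before showing it to users. Proximai's real filter is not public.
BLOCKED_TERMS = {"placeholder_sensitive_term"}  # illustrative placeholder, not a real list

def filter_reply(reply: str) -> str:
    """Return the model's reply, or a refusal message if it contains a blocked term."""
    lowered = reply.lower()
    if any(term in lowered for term in BLOCKED_TERMS):
        return "This conversation violates the rules."
    return reply
```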

Chinese regulators, which last year introduced rules to strengthen governance of “deepfake” technology, have not commented on ChatGPT. However, state media this week warned about stock market risks amid a frenzy over local ChatGPT-concept stocks.

The Cyberspace Administration of China, the internet regulator, did not respond to Reuters’ request for comment.

“With the regulations released last year, the Chinese government is saying: we already see this technology coming and we want to be ahead of the curve,” said Rogier Creemers, an assistant professor at Leiden University.

“I fully expect the great majority of the AI-generated content to be non-political.”

Chinese rivals

Joining the buzz have been some of the country’s largest tech giants, such as Baidu and Alibaba, which gave updates this week on AI models they have been working on, sending their shares soaring.

Baidu said this week it would complete internal testing in March of its “Ernie Bot”, a large AI model the search firm has been working on since 2019.

On Wednesday, Alibaba said that its research institute Damo Academy was also testing a ChatGPT-style tool.

Mr. Duan, whose company has been using a Baidu AI chatbot named Plato for natural language processing, said ChatGPT was at least a generation more powerful than China’s current NLP solutions, though it was weaker in some areas, such as understanding conversation context.

Baidu did not reply to Reuters’ request for comment.

OpenAI first launched access to GPT-3, or Generative Pre-trained Transformer 3, in 2020; an updated version of that model is the backbone of ChatGPT.

Mr. Duan said potential long-term compliance risks mean Chinese companies would most likely replace ChatGPT with a local alternative, if they could match the U.S.-developed product’s functionality.

“So we actually hope that there can be alternative solutions in China which we can directly use… it may handle Chinese even better, and it can also better comply with regulations,” he said.


Microsoft Is Adding ChatGPT-Like Technology to Bing, Edge Browser

Microsoft is fusing ChatGPT-like technology into its search engine Bing, transforming an internet service that now trails far behind Google into a new way of communicating with artificial intelligence.

The revamping of Microsoft’s second-place search engine could give the software giant a head start against other tech companies in capitalising on the worldwide excitement surrounding ChatGPT, a tool that’s awakened millions of people to the possibilities of the latest AI technology.

Along with adding it to Bing, Microsoft is also integrating the chatbot technology into its Edge browser. Microsoft announced the new technology at an event Tuesday at its headquarters in Redmond, Washington.

“Think of it as faster, more accurate, more powerful” than ChatGPT, built with technology from ChatGPT-maker OpenAI but tuned for search queries, said Yusuf Mehdi, a Microsoft executive who leads its consumer division, in an interview.

A public preview of the new Bing launched Tuesday for desktop users who sign up for it, but Mehdi said the technology will scale to millions of users in coming weeks and will eventually come to the smartphone apps for Bing and Edge. For now, everyone can try a limited number of queries, he said.

The strengthening partnership with OpenAI has been years in the making, starting with a $1 billion (roughly Rs. 8,300 crore) investment from Microsoft in 2019 that led to the development of a powerful supercomputer specifically built to train the San Francisco startup’s AI models.

While it’s not always factual or logical, ChatGPT’s mastery of language and grammar comes from having ingested a huge trove of digitised books, Wikipedia entries, instruction manuals, newspapers and other online writings.

Microsoft CEO Satya Nadella said Tuesday that new AI advances are “going to reshape every software category we know,” including search, much like earlier innovations in personal computers and cloud computing. He said it is important to develop AI “with human preferences and societal norms and you’re not going to do that in a lab. You have to do that out in the world.”

The shift to making search engines more conversational — able to confidently answer questions rather than offering links to other websites — could change the advertising-fuelled search business, but it also poses risks if the AI systems don’t get their facts right. Their opaqueness also makes it hard to trace answers back to the original human-made images and texts they’ve effectively memorised, though the new Bing includes annotations that reference the source data.

“Bing is powered by AI, so surprises and mistakes are possible,” is a message that appears at the bottom of the preview version of Bing’s new homepage. “Make sure to check the facts.”

As an example of how it works, Mehdi asked the new Bing to compare the most influential Mexican painters and it provided typical search results, but also, on the right side of the page, compiled a fact box summarising details about Diego Rivera, Frida Kahlo and Jose Clemente Orozco. In another example, he quizzed it on 1990s-era rap, showing its ability to distinguish between the song “Jump” by Kris Kross and “Jump Around” by House of Pain. And he used it to show how it could plan a vacation or help with shopping.

Gartner analyst Jason Wong said new technological advancements will mitigate what led to Microsoft’s disastrous 2016 launch of the experimental chatbot Tay, which users trained to spout racist and sexist remarks. But Wong said “reputational risks will still be at the forefront” for Microsoft if Bing produces answers with low accuracy or so-called AI “hallucinations” that mix and conflate data.

Google has been cautious about such moves. But in response to pressure over ChatGPT’s popularity, Google CEO Sundar Pichai on Monday announced a new conversational service named Bard that will be available exclusively to a group of “trusted testers” before being widely released later this year.

Wong said Google was caught off-guard with the success of ChatGPT but still has the advantage over Microsoft in consumer-facing technology, while Microsoft has the edge in selling its products to businesses.

Chinese tech giant Baidu also this week announced a similar search chatbot coming later this year, according to Chinese media. Other tech rivals such as Facebook parent Meta and Amazon have been researching similar technology, but Microsoft’s latest moves aim to position it at the centre of the ChatGPT zeitgeist.

Microsoft disclosed in January that it was pouring billions more dollars into OpenAI as it looks to fuse the technology behind ChatGPT, the image-generator DALL-E and other OpenAI innovations into an array of Microsoft products tied to its cloud computing platform and its Office suite of workplace products like email and spreadsheets.

The most surprising integration might be with Bing, which is the second-place search engine in many markets but has never come close to challenging Google’s dominant position.

Bing launched in 2009 as a rebranding of Microsoft’s earlier search engines and was run for a time by Nadella, years before he took over as CEO. Its significance was boosted when Yahoo and Microsoft signed a deal for Bing to power Yahoo’s search engine, giving Microsoft access to Yahoo’s greater search share. Similar deals infused Bing into the search features for devices made by other companies, though users wouldn’t necessarily know that Microsoft was powering their searches.

By making it a destination for ChatGPT-like conversations, Microsoft could invite more users to give Bing a try, though the new version so far is limited to desktops and doesn’t yet have an interface for smartphones — where most people now access the internet.

On the surface, at least, a Bing integration seems far different from what OpenAI has in mind for its technology. Appearing at Microsoft’s event, OpenAI CEO Sam Altman said “the new Bing experience looks fantastic” and is based in part on learnings from its GPT line of large language models. He said a key reason for his startup’s Microsoft partnership is to help get OpenAI technology “into the hands of millions of people.”

OpenAI has long voiced an ambitious vision for safely guiding what’s known as AGI, or artificial general intelligence, a not-yet-realised concept that harkens back to ideas from science fiction about human-like machines. OpenAI’s website describes AGI as “highly autonomous systems that outperform humans at most economically valuable work.”

OpenAI started out as a nonprofit research laboratory when it launched in December 2015 with backing from Tesla CEO Elon Musk and others. Its stated aims were to “advance digital intelligence in the way that is most likely to benefit humanity as a whole, unconstrained by a need to generate financial return.”

That changed in 2018 when it incorporated a for-profit business, OpenAI LP, and shifted nearly all its staff into the business, not long after releasing its first generation of the GPT model for generating human-like paragraphs of readable text.

OpenAI’s other products include the image-generator DALL-E, first released in 2021, the computer programming assistant Codex and the speech recognition tool Whisper.


