Amazon is using generative AI to drive more same-day shipping with smarter robots and better routes

For years, Amazon has set the bar for package delivery. When Prime launched in 2005, two-day shipping was unheard of. By 2019, one-day shipping was standard for millions of items. Now, the retail giant is turning to generative AI to drive more same-day shipping.

Amazon is using the technology to optimize delivery routes, make more intelligent warehouse robots, create more-ergonomic environments for employees and better predict where to stock new items, said Steve Armato, Amazon’s vice president of transportation technology and services.

During an exclusive tour of Amazon’s largest California sort center, located in Tracy, Armato told CNBC that 60% of Prime orders in March were delivered the same day or next day in the top 60 metropolitan areas in the U.S. Amazon is betting on generative AI to increase that figure.  

“It seems subtle, but at this scale, getting just one more product in the right spot means that it’s shipping less distance when you order it,” Armato said in an interview at the warehouse.

In 2020, Amazon began developing models for demand forecasting and supply chain optimization using transformer architecture, the backbone of what we know today as generative AI.

“Generative AI is the next big evolution in technology,” Armato said. “It’s remarkable, and we’re already applying it in very practical ways across our operations.”

But not all the changes that generative AI may bring to the e-commerce giant are positive. Analysts told CNBC there are concerns about generative AI's high energy needs and about its potential to enable robots to replace Amazon's human workforce.

Robots and new roles 

The number of Amazon warehouse robots grew from 350,000 in 2021 to more than 750,000 in 2023, according to the company.

Amazon began adding AI transformer models to its warehouse delivery robots in 2022 so the machines can dash around each other more intelligently. CNBC watched hundreds of them move in a coordinated grid in the warehouse. Armato calls this “the dance floor.”

“Some of the two-day deliveries might stand aside, let the robot with a next-day delivery go on its mission first and take a straight line to its destination,” Armato said. 
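For illustration only, here is a toy sketch of the right-of-way rule Armato describes, with more urgent delivery promises popping off a priority queue first. The robot names and priority numbers are invented; Amazon has not published how its scheduler actually works.

```python
# Toy illustration of priority-based right of way on the "dance floor".
# Lower priority number = more urgent delivery promise. All values made up.
import heapq

queue = [(2, "two_day_robot"), (1, "next_day_robot"), (3, "standard_robot")]
heapq.heapify(queue)  # order robots by urgency

while queue:
    _, robot = heapq.heappop(queue)
    print(robot, "crosses the dance floor")  # next_day_robot goes first
```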

Hundreds of robots dash around each other with the help of generative AI at Amazon's largest California sort center in Tracy, California, July 31, 2024. (Photo: Lisa Setyon)

While these robots navigate using a series of QR codes, Amazon's next generation of drive units, called Proteus, is fully autonomous, the company said.

“They’re using generative AI and computer vision to avoid obstacles and find the right place to stop,” Armato said. 

As part of the company's AI strategy, Amazon in August struck a deal with AI startup Covariant. Amazon hired the startup's founders and licensed its models that help robots handle a wider range of physical objects. Amazon is also testing Digit, a bipedal robot from Agility Robotics that can grasp and handle items in a humanoid way.

CNBC saw a row of 20 robotic “Robin” arms that use computer vision to determine how much pressure to use when picking up various package shapes and sizes. Amazon said generative AI teaches the arms how to handle products they’ve never seen before based on data from similar products in Amazon’s vast catalog.
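Amazon has not disclosed how this works internally, but the idea of inferring handling parameters for an unseen product from similar catalog items can be sketched with a simple nearest-neighbours rule. Everything below (the features, pressures, and averaging scheme) is hypothetical.

```python
# Hypothetical sketch: estimate grip pressure for an unseen product from
# similar known products. Numbers and features are illustrative only.
import numpy as np

# Toy catalog: feature vectors (e.g., size, weight, rigidity) and the grip
# pressure (arbitrary units) that worked for each known product.
catalog_features = np.array([
    [0.30, 0.20, 0.9],   # small rigid box
    [0.60, 1.50, 0.2],   # large soft package
    [0.25, 0.10, 0.8],   # small rigid envelope
])
catalog_pressure = np.array([4.0, 1.5, 3.5])

def predict_pressure(new_item: np.ndarray, k: int = 2) -> float:
    """Similarity-weighted average over the k most similar known products."""
    sims = catalog_features @ new_item / (
        np.linalg.norm(catalog_features, axis=1) * np.linalg.norm(new_item)
    )
    top = np.argsort(sims)[-k:]            # indices of the k nearest items
    weights = sims[top] / sims[top].sum()  # normalise similarities to weights
    return float(weights @ catalog_pressure[top])

print(predict_pressure(np.array([0.28, 0.15, 0.85])))  # pressure for a small rigid item
```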

A similar model is used to better assess damaged items and keep them from shipping out. Amazon’s AI is three times better at identifying damaged products than humans are, the company said.

Introducing more robotics with generative AI without replacing human workers is a balancing act for Amazon, said Tom Forte, senior equity analyst at the Maxim Group.

“How can they implement automation to improve efficiency and manage labor expenses, but how can they do it in a way that complements their use of humans and doesn’t replace them?” Forte said.

Rather than replacing workers, the robots are reducing the burden on employees and creating new roles, Armato said. Amazon said it plans to spend $1.2 billion to upskill more than 300,000 employees by the end of 2025 as generative AI and robotics change the company’s processes. 

“Someone needs to maintain [the robot] if it breaks down,” Armato said. “Or if something does get dropped on the dance floor, we have a process and special training to go clean that up. And so each of those creates new categories of jobs, some of which have higher earnings potential.”

Amazon has faced scrutiny in recent years over its workplace injury record, with federal citations for safety violations and a yearlong Senate probe that found that Amazon’s big annual sale, Prime Day, was a “major” cause of worker injuries. Amazon appealed the citations and said the report ignores progress it’s made. 

Many of Amazon’s robots move tall bins of items to workstations where employees pick and pack them, which reduces how much humans have to walk, Armato said. AI is also reducing the need for workers to reach and bend, he said.

“One algorithmic improvement is to take our faster-selling products and place those on the shelves at waist height,” Armato said. “That’s your ergonomic power zone.”

Robotic drive units bring tall stacks of items to workstations for picking and packing at an Amazon same-day center in Richmond, California, Aug. 31, 2024. (Photo: Katie Tarasov)

Predicting orders and routes 

With all those robots and workers, Amazon delivered more than 2 billion items the same day or next day in the first quarter of 2024, according to the company.  

Amazon has always used algorithms to predict how much of what inventory is needed, when and where. The company said it’s using generative AI to predict where best to place items it hasn’t previously sold. 

“When we place a product in the right place ahead of time, before you click buy, it’s traveling less distance, which is a win for speed and sustainability,” Armato said.
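As a rough illustration of the placement decision being described, one can pick the warehouse that minimises expected shipping distance under a regional demand forecast. The warehouses, regions, and numbers below are made up, and Amazon's actual models are certainly far more sophisticated.

```python
# Illustrative placement sketch (not Amazon's actual system): choose the
# warehouse with the lowest demand-weighted distance to customers.
regional_demand = {"west": 0.5, "midwest": 0.2, "east": 0.3}  # forecast order share

# Distance (arbitrary units) from each candidate warehouse to each region.
distance = {
    "tracy_ca":    {"west": 1, "midwest": 5, "east": 9},
    "columbus_oh": {"west": 7, "midwest": 1, "east": 4},
}

def expected_distance(warehouse: str) -> float:
    """Average shipping distance, weighted by forecast demand share."""
    return sum(share * distance[warehouse][region]
               for region, share in regional_demand.items())

best = min(distance, key=expected_distance)
print(best, expected_distance(best))  # tracy_ca wins for this west-heavy forecast
```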

Amazon Web Services has data centers filled with servers running AI workloads that give the company an edge over its retail rivals because it can train its AI in-house. As an early online-only retailer, Amazon got a head start on collecting mass aggregate data on shopping behavior and delivery logistics. Amazon is now using that trove of data to create AI models for use in everything from supply chain optimization to warehouse robotics, according to the company.

“It’s not that Walmart and Target and Costco and others don’t have their own reams of data, but they’re looking at things a little bit differently, and they have much older systems,” said Sucharita Kodali, retail analyst at Forrester Research.  

How eco-friendly generative AI will be in the long run is unclear. That’s because training and running generative AI is a carbon-intensive process, and by 2027, AI servers worldwide are projected to use as much power every year as Sweden or the Netherlands.  

That’s in conflict with Amazon’s 2019 commitment to reach net-zero carbon by 2040.

The company claims that the use of AI is helping cut down the carbon footprint of package delivery. Amazon is reducing carbon by using more than 20 machine learning models to improve mapping for its vast network of 390,000 delivery drivers, predicting road closures and choosing more efficient routes, the company said. 

Beyond its warehouses, Amazon has also introduced generative AI to help its sellers and shoppers.

The company’s new Amazon Personalize AI tool generates hyper-personalized product recommendations. Sellers can also use generative AI to write highly targeted product descriptions or generate images of their products in different “seasonal and lifestyle” settings.

For shoppers, Amazon in 2023 began populating its website with AI-generated summaries of product reviews, and in February, the company launched a generative AI-powered conversational shopping assistant called Rufus.

Additionally, Amazon said, it has invested $4 billion in AI startup Anthropic, which makes chatbot Claude, a competitor to OpenAI’s ChatGPT. Amazon also makes its own AI-focused microchips and its own generative AI tools for developers, which it also uses in operations, the company said.

Whether Amazon’s huge investment in generative AI will translate to profits remains an open question. 

“I have yet to see huge lift in anybody’s retail business due to generative AI, including Amazon,” Kodali said. “I think a lot of their biggest impact has happened because of the earlier investments, not necessarily some of these more recent investments.”

Battling the echo chamber: Osavul takes on Russian disinformation

Russian disinformation adapts to different audiences, from young people to political groups. Osavul’s co-founder explains how to recognise these tactics.

The Ukrainian company Osavul has received $3 million (€2.78 million), the largest investment to date in a company exposing disinformation in Europe.

In its latest press release, the company introduces its three European investors: 42CAP, a German venture capital firm; u.ventures, a US-government-backed fund that co-finances Ukrainian and Moldovan projects; and the SMRK Venture Capital Fund, which already assisted with fundraising last year.

Osavul’s co-founder, Dmytro Bilash, never intended to work in security. He comes from the business world. He analysed data from companies to create advertising. In 2022, everything changed. Russia launched its full-scale invasion of Ukraine.

Bilash’s flat in Kyiv was destroyed by two Russian missiles. He felt compelled to act and wanted to help. A request for support from the Ukrainian government led to the launch of Osavul, a media intelligence organisation that uses artificial intelligence (AI) to expose and combat disinformation.

What began as a small project in 2022, funded through crowdfunding and donations, is now involved in EU-funded and NATO projects and attracts millions in funding. Osavul is now headquartered in Delaware in the US and employs 28 specialists around the world. Over 500 analysts use Osavul’s data.

Dmytro Bilash, co-founder of Osavul, speaks to Euronews about Russian disinformation. He explains why it has gained such traction in Germany, and offers tips on how fake news can be debunked.

Euronews: Why did you start Osavul?

Bilash: We wanted to help. The full-scale invasion has changed the lives of all Ukrainians. We offered our expertise as analysts, and eventually we were approached by people from the government. The problem of disinformation is so massive that no private company – neither from Europe nor from the USA – could deal with this mass of mis- and disinformation. The scale was much smaller before the full-scale invasion. We tried to develop something that could deal with this modern, new threat of disinformation.

I had previously worked in advertising, which wasn’t meaningful. I wanted to do something that had meaning, something that was necessary.

Euronews: How did you go from advertising to analysing disinformation?

Bilash: Yes, that was the problem. There was no ideal solution. We knew how to analyse data, publicly available data, and we used that knowledge to develop something. It’s now a pretty sophisticated technology.

Euronews: How do you work now? How does an analysis like that work?

Bilash: There are a few steps to consider. We collect data from websites and other open sources, more than 10 million messages a day. Our AI analyses this data to identify key narratives, topics discussed, and opinions expressed by media, businesses, political organisations, or opinion leaders.

For example, if Russia launches a campaign in a European country to interfere with elections or create discord across Europe using economic issues, we can detect it and highlight the specific narratives used in these attacks.

We use three types of tools for this: open-source tools, purchased commercial tools, and our own tools.

Once the AI model has filtered out the main ideas and the fake news, we need to understand: Who is spreading this misinformation? Is it a public institution, propaganda channels or websites? What impact does this have? Does the false news stay in one channel, or does it spread further, creating a larger echo chamber?

We collect all this information and make it available to decision-makers. Ultimately, we want to provide decision-makers, legislators, and security institutions with the information they need to act. If laws are being broken, they can take action.
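As a rough sketch of the pipeline Bilash describes, the skeleton below ingests messages, tags each with a narrative, and tracks which sources and channels spread it. A keyword match stands in for Osavul's AI classifier; all names and messages are invented.

```python
# Highly simplified skeleton of a narrative-detection pipeline:
# ingest -> classify narrative -> measure who spreads it and how far.
from collections import defaultdict

NARRATIVES = {
    "economy_weak": ["bankrupt", "recession", "collapse"],
}

messages = [
    {"text": "Another factory goes bankrupt", "source": "tg_channel_a", "channel": "telegram"},
    {"text": "Economic collapse is coming",   "source": "x_account_b",  "channel": "x"},
    {"text": "Cute cat video",                "source": "x_account_b",  "channel": "x"},
]

def classify(text: str):
    """Stand-in for the AI model: return the first matching narrative, if any."""
    for narrative, keywords in NARRATIVES.items():
        if any(k in text.lower() for k in keywords):
            return narrative
    return None

spread = defaultdict(lambda: {"sources": set(), "channels": set()})
for msg in messages:
    if (n := classify(msg["text"])):
        spread[n]["sources"].add(msg["source"])
        spread[n]["channels"].add(msg["channel"])

for narrative, info in spread.items():
    print(narrative, "spread by", info["sources"], "across", info["channels"])
```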

Euronews: Can you name an example of what the main narratives of Russian disinformation are?

Bilash: It is important to understand that disinformation patterns depend very much on the culture and the places where they are spread. The patterns are adapted.

They therefore depend on the group that is being addressed: young people, right- or left-leaning people, Russian speakers or German speakers?

One of the main narratives of Russian disinformation is that the German economy is weakening.

An example: a furniture company goes bankrupt. It doesn’t matter whether the company exists or not; the story just has to seem real.

In Germany, real news is often placed in false contexts in order to show that the German economy, or the state, is weakening. Ultimately, Russia’s aim with its disinformation in Germany is to weaken support for Ukraine.

Euronews: Why is Germany a big target for Russian disinformation campaigns?

Bilash: I see several reasons for this. First, Germany is an important country. It is the largest economy in Europe – that is the obvious reason. The large Russian-speaking community in Germany makes it easier for Russia, but it is not absolutely necessary. 

Another reason is that Telegram, as a messaging service, is much more widespread in Germany than in other Western European countries. A lot of disinformation is spread via X, TikTok and Telegram – these media seem less strongly controlled than Meta’s platforms, such as Facebook, Instagram or WhatsApp.

In addition, Russia can utilise structures that were established before the war due to the strong connection between Russia and Germany. The more freedom of expression is valued in culture, the more breeding ground there is for disinformation.

Euronews: How can ‘normal’ people recognise disinformation?

Bilash: When I see something on social media, I try to track my feelings: if a post or a video triggers a very strong feeling in me, then I become vigilant and ask myself: why? So the first warning signal with fake news is that strong feelings are being triggered. The second is the sender, the source. Where did I get the information from? From a friend I trust, or from a random X account that usually posts cat and dog videos and suddenly shares a strong political opinion? Sometimes something like that is enough to understand that something is not fully trustworthy.

And of course, disinformation increases the value of good journalism, which makes it easier to debunk false claims and verify information.

Euronews: Are there Russian disinformation narratives that are more explicitly disseminated in Ukraine?

Bilash: In Ukraine itself, the situation is somewhat different from Europe or the rest of the world. Ukrainians have generally become much more vigilant.

The strategies here are often very closely linked to military events. This means that Russia makes ‘conquests’ and military victories look much bigger than they are; it hypes them up.

For example, if a village near the frontline has been brought under Russian control, even if it no longer exists or no one lives there any more, Russian propaganda celebrates it as a great victory for the Russian army. The aim of such campaigns is to disrupt the Ukrainians’ sense of unity – both inside and outside Ukraine. To weaken and destabilise mutual support among them.

Euronews: Is there a common strategy that Russia uses for its disinformation campaigns?

Bilash: When Russian propaganda talks about a nuclear threat, it is a sign – either for the domestic population or the international community. Remember the bombing of the maternity clinic in Mariupol. Something we call “information alibi” was used there.

Even before the attack, information was spread that there was a Ukrainian battalion in the hospital. When the attack took place, the disinformation campaign was easier to spread, because the false information about the battalion in the hospital was already out there, the supposed reason for the attack had already been established.

Spreading the truth is easy: Something happened, you report it. If you’re trying to spread a certain fake narrative, you have to stick to it, you have to prepare it. It’s like a machine into which resources are channelled to spread these false narratives.

AI may not steal many jobs after all, it may just make workers more efficient

Alorica, a company in Irvine, California, that runs customer-service centers around the world, has introduced an artificial intelligence translation tool that lets its representatives talk with customers who speak 200 different languages and 75 dialects.

So an Alorica representative who speaks, say, only Spanish can field a complaint about a balky printer or an incorrect bank statement from a Cantonese speaker in Hong Kong. Alorica wouldn’t need to hire a rep who speaks Cantonese.

Such is the power of AI. And, potentially, the threat: Perhaps companies won’t need as many employees — and will slash some jobs — if chatbots can handle the workload instead. But the thing is, Alorica isn’t cutting jobs. It’s still hiring aggressively.

The experience at Alorica — and at other companies, including furniture retailer IKEA — suggests that AI may not prove to be the job killer that many people fear. Instead, the technology might turn out to be more like breakthroughs of the past — the steam engine, electricity, the internet: That is, eliminate some jobs while creating others. And probably making workers more productive in general, to the eventual benefit of themselves, their employers and the economy.

Nick Bunker, an economist at the Indeed Hiring Lab, said he thinks AI “will affect many, many jobs — maybe every job indirectly to some extent. But I don’t think it’s going to lead to, say, mass unemployment. We have seen other big technological events in our history, and those didn’t lead to a large rise in unemployment. Technology destroys but also creates. There will be new jobs that come about.’’

At its core, artificial intelligence empowers machines to perform tasks previously thought to require human intelligence. The technology has existed in early versions for decades, having emerged with a problem-solving computer program, the Logic Theorist, built in the 1950s at what’s now Carnegie Mellon University. More recently, think of voice assistants like Siri and Alexa. Or IBM’s chess-playing computer, Deep Blue, which managed to beat the world champion Garry Kasparov in 1997.

AI burst into public consciousness in 2022 when OpenAI introduced ChatGPT, the generative AI tool that can conduct conversations, write computer code, compose music, craft essays and supply endless streams of information. The arrival of generative AI has raised worries that chatbots will replace freelance writers, editors, coders, telemarketers, customer service reps, paralegals and many more.

“AI is going to eliminate a lot of current jobs, and this is going to change the way that a lot of current jobs function,” Sam Altman, the CEO of OpenAI, said in a discussion at the Massachusetts Institute of Technology in May.

Yet the widespread assumption that AI chatbots will inevitably replace service workers, the way physical robots took many factory and warehouse jobs, isn’t becoming reality in any widespread way — not yet, anyway. And maybe it never will.

The White House Council of Economic Advisers said last month that it found “little evidence that AI will negatively impact overall employment.’’ The advisers noted that history shows technology typically makes companies more productive, speeding economic growth and creating new types of jobs in unexpected ways.

They cited a study this year led by David Autor, a leading MIT economist: It concluded that 60% of the jobs Americans held in 2018 didn’t even exist in 1940, having been created by technologies that emerged only later.

The outplacement firm Challenger, Gray & Christmas, which tracks job cuts, said it has yet to see much evidence of layoffs that can be attributed to labor-saving AI.

“I don’t think we’ve started seeing companies saying they’ve saved lots of money or cut jobs they no longer need because of this,’’ said Andy Challenger, who leads the firm’s sales team. “That may come in the future. But it hasn’t played out yet.’’

At the same time, the fear that AI poses a serious threat to some categories of jobs isn’t unfounded.

Consider Suumit Shah, an Indian entrepreneur who caused an uproar last year by boasting that he had replaced 90% of his customer support staff with a chatbot named Lina. The move at Shah’s company, Dukaan, which helps customers set up e-commerce sites, shrank the response time to an inquiry from 1 minute, 44 seconds to “instant.” It also cut the typical time needed to resolve problems from more than two hours to just over three minutes.

“It’s all about AI’s ability to handle complex queries with precision,” Mr. Shah said by email. The cost of providing customer support, he said, fell by 85%.

“Tough? Yes. Necessary? Absolutely,’’ Mr. Shah posted on X.

Dukaan has expanded its use of AI to sales and analytics. “The tools,” Mr. Shah said, “keep growing more powerful.”

“It’s like upgrading from a Corolla to a Tesla,” he said. “What used to take hours now takes minutes. And the accuracy is on a whole new level.”

Similarly, researchers at Harvard Business School, the German Institute for Economic Research and London’s Imperial College Business School found in a study last year that job postings for writers, coders and artists tumbled within eight months of the arrival of ChatGPT.

A 2023 study by researchers at Princeton University, the University of Pennsylvania and New York University concluded that telemarketers and teachers of English and foreign languages held the jobs most exposed to ChatGPT-like language models. But being exposed to AI doesn’t necessarily mean losing your job to it. AI can also do the drudge work, freeing up people to do more creative tasks.

The Swedish furniture retailer IKEA, for example, introduced a customer-service chatbot in 2021 to handle simple inquiries. Instead of cutting jobs, IKEA retrained 8,500 customer-service workers to handle such tasks as advising customers on interior design and fielding complicated customer calls.

Chatbots can also be deployed to make workers more efficient, complementing their work rather than eliminating it. A study by Erik Brynjolfsson of Stanford University and Danielle Li and Lindsey Raymond of MIT tracked 5,200 customer-support agents at a Fortune 500 company who used a generative AI-based assistant. The AI tool provided valuable suggestions for handling customers. It also supplied links to relevant internal documents.

Those who used the chatbot, the study found, proved 14% more productive than colleagues who didn’t. They handled more calls and completed them faster. The biggest productivity gains — 34% — came from the least-experienced, least-skilled workers.

At an Alorica call center in Albuquerque, New Mexico, one customer-service rep had been struggling to gain access to the information she needed to quickly handle calls. After Alorica trained her to use AI tools, her “handle time’’ — how long it takes to resolve customer calls — fell in four months from an average of 14 minutes a call to just over seven minutes.

Over a period of six months, the AI tools helped one group of 850 Alorica reps reduce their average handle time to six minutes, from just over eight minutes. They can now field 10 calls an hour instead of eight — an additional 16 calls in an eight-hour day.
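A quick check of that arithmetic, using only the figures stated above:

```python
# Verify the article's figures: two extra calls an hour over an
# eight-hour shift gives the 16 additional calls cited above.
minutes_per_call_after = 6
calls_per_hour_after = 60 // minutes_per_call_after   # 10 calls an hour
calls_per_hour_before = 8                             # stated figure, at just over eight minutes a call
hours_per_shift = 8
extra_calls_per_day = (calls_per_hour_after - calls_per_hour_before) * hours_per_shift
print(extra_calls_per_day)  # 16
```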

Alorica agents can use AI tools to quickly access information about the customers who call in — to check their order history, say, or determine whether they had called earlier and hung up in frustration.

Suppose, said Mike Clifton, Alorica’s co-CEO, a customer complains that she received the wrong product. The agent can “hit replace, and the product will be there tomorrow,” he said. “‘Anything else I can help you with? No?’ Click. Done. Thirty seconds in and out.”

Now the company is beginning to use its Real-time Voice Language Translation tool, which lets customers and Alorica agents speak and hear each other in their own languages.

“It allows (Alorica reps) to handle every call they get,” said Rene Paiz, a vice president of customer service. “I don’t have to hire externally’’ just to find someone who speaks a specific language.

Yet Alorica isn’t cutting jobs. It continues to seek hires — increasingly, those who are comfortable with new technology.

“We are still actively hiring,’’ Ms. Paiz says. “We have a lot that needs to be done out there.’’

Amazon Turns to Anthropic’s Claude for Alexa AI Revamp

Amazon’s revamped Alexa, due for release in October ahead of the U.S. holiday season, will be powered primarily by Anthropic’s Claude artificial intelligence models rather than its own AI, five people familiar with the matter told Reuters.

Amazon plans to charge $5 to $10 a month for its new “Remarkable” version of Alexa as it will use powerful generative AI to answer complex queries, while still offering the “Classic” voice assistant for free, Reuters reported in June.

But initial versions of the new Alexa using in-house software simply struggled for words, sometimes taking six or seven seconds to acknowledge a prompt and reply, one of the people said.

That’s why Amazon turned to Claude, an AI chatbot developed by startup Anthropic, as it performed better than the online retail giant’s own AI models, the people said.

Reuters based this story upon interviews with five people with direct knowledge of the Alexa strategy. All declined to be named as they are not authorized to discuss non-public matters.

Alexa, accessed mainly through Amazon televisions and Echo devices, can set timers, play music, act as a central hub for smart home controls and answer one-off questions.

But Amazon’s attempts to convince users to shop through Alexa to generate more revenue have been mostly unsuccessful and the division remains unprofitable.

As a result, senior management has stressed that 2024 is a critical year for Alexa to finally demonstrate it can generate meaningful sales – and the revamped paid version is seen as a way both to do that and keep pace with rivals.

“Amazon uses many different technologies to power Alexa,” a company spokeswoman said in a statement in response to detailed Reuters questions for this story.

“When it comes to machine learning models, we start with those built by Amazon, but we have used, and will continue to use, a variety of different models – including (Amazon AI model) Titan and future Amazon models, as well as those from partners – to build the best experience for customers,” the spokeswoman said.

Anthropic, in which Amazon owns a minority stake, declined to comment for this story.

AI Partnerships

Amazon has typically eschewed relying on technology it hasn’t developed in-house so it can ensure it has full control of the user experience, data collection and direct relationships with customers.

But it would not be alone in turning to a partner to improve AI products. Microsoft and Apple, for example, have both struck partnerships with OpenAI to use its ChatGPT to power some of their products.

The release of the Remarkable Alexa, as it is known internally, is expected in October, with a preview of the new service coming during Amazon’s annual devices and services event typically held in September, the people said.

Amazon has not yet said, however, when it plans to hold its showcase event, which will be the first major public appearance of its new devices chief, Panos Panay, who was hired last year to replace long-time executive David Limp.

The wide release in late 2022 of ChatGPT, which gives full-sentence answers almost instantaneously to complicated queries, set off a frenzy of investing and corporate maneuvering to develop better AI software for a variety of functions, including image, video and voice services.

By comparison, Amazon’s decade-old Alexa appeared outmoded, Amazon workers have told Reuters.

While Amazon has a mantra of “working backwards from the customer” to come up with new services, some of the people said that within the Alexa group, the emphasis since last year has instead been on keeping up with competitors in the AI race.

Amazon workers also have expressed skepticism that customers would be willing to pay $60 to $120 per year for a service that’s free today – on top of the $139 many already pay for their Prime memberships.

Alexa Upgrades

As envisioned, the paid version of Alexa would carry on conversations with a user that build on prior questions and answers, the people with knowledge of the Alexa strategy said.

The upgraded Alexa is designed to allow users to seek shopping advice such as which clothes to buy for a vacation and to aggregate news stories, the people said. And it is meant to carry out more complicated requests, such as ordering food or drafting emails all from a single prompt.

Amazon hopes the new Alexa will also be a supercharged home automation hub, remembering customer preferences so that, say, morning alarms are set, or the television knows to record favorite shows even when a user forgets to, they said.

The company’s plans for Alexa, however, could be delayed or altered if the technology fails to meet certain internal benchmarks, the people said, without giving further details.

Bank of America analyst Justin Post estimated in June that there are roughly 100 million active Alexa users and that about 10% of those might opt for the paid version of Alexa. Assuming the low end of the monthly price range, that would bring in at least $600 million in annual sales.
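That estimate can be reproduced with simple back-of-the-envelope math:

```python
# Reproducing the analyst's revenue estimate from the paragraph above.
active_users = 100_000_000         # Bank of America's rough estimate of Alexa users
paid_share = 0.10                  # ~10% assumed to opt for the paid tier
monthly_price = 5                  # low end of the reported $5-$10 range
annual_revenue = active_users * paid_share * monthly_price * 12
print(f"${annual_revenue:,.0f}")   # $600,000,000 a year
```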

Amazon says it has sold 500 million Alexa-enabled devices but does not disclose how many active users there are.

Announcing a deal to invest $4 billion in Anthropic in September last year, Amazon said its customers would gain early access to its technology. Reuters could not determine if Amazon would have to pay Anthropic additionally for the use of Claude in Alexa.

Amazon declined to discuss the details of its agreements with the startup. Alphabet’s Google has also invested at least $2 billion in Anthropic.

The retailer, along with Google, is facing a formal probe from the UK’s antitrust regulator over the Anthropic deal and its impact on competition. The regulator announced an initial investigation in August and said it has 40 working days to decide whether to escalate it to a more heightened stage of scrutiny.

The Washington Post earlier reported the October time frame for release of the new Alexa.

© Thomson Reuters 2024

HP AI Companion Will Let You Access GPT-4 Powered Chatbot Locally

HP launched the first generation of its artificial intelligence (AI) PCs in India, the HP EliteBook Ultra and HP OmniBook X. Both devices are powered by Qualcomm’s Snapdragon X series chipsets and feature the dedicated Copilot key, which gives them the Copilot+ PC moniker. The OmniBook X is targeted towards retail consumers whereas the EliteBook Ultra is aimed at enterprise customers. Apart from featuring Microsoft’s OS-based AI features, HP has also equipped the devices with a few first-party AI features.

HP Introduces AI Features in Its AI PCs

Among the HP-provided AI features, the most interesting is an on-device AI chatbot, HP AI Companion, powered by OpenAI’s GPT-4. Packaged as an app, it offers general AI chatbot capabilities where users can ask queries and get responses. It also has an Analyze feature where users can upload documents, PDFs, and text files, and the AI will process them and answer questions about them. The company says the AI chatbot functions entirely offline, so the data never leaves the device.
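HP has not published how the Analyze feature is implemented, but a local document Q&A flow of the kind described typically chunks a file, retrieves the most relevant passage, and feeds it to an on-device model. The sketch below is purely illustrative; `LocalLLM` is a hypothetical stand-in, and the toy word-overlap retrieval is not HP's method.

```python
# Illustrative on-device document Q&A: chunk, retrieve, generate locally.
class LocalLLM:
    """Placeholder for a hypothetical on-device model; real inference would go here."""
    def generate(self, prompt: str) -> str:
        return "[model output for: " + prompt[:40] + "...]"

def chunk(text: str, size: int = 400) -> list[str]:
    """Split the document into fixed-size chunks."""
    return [text[i:i + size] for i in range(0, len(text), size)]

def retrieve(chunks: list[str], question: str) -> str:
    """Pick the chunk sharing the most words with the question (toy scoring)."""
    q_words = set(question.lower().split())
    return max(chunks, key=lambda c: len(q_words & set(c.lower().split())))

def answer(document: str, question: str, llm: LocalLLM) -> str:
    context = retrieve(chunk(document), question)
    prompt = f"Answer using only this context:\n{context}\n\nQ: {question}"
    return llm.generate(prompt)  # nothing leaves the machine

print(answer("Shipping costs rose 4% in Q2.", "What happened to shipping costs?", LocalLLM()))
```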

Additionally, HP is integrating new AI features into the webcam powered by Poly, the company it acquired in November 2022. Users get access to features such as Spotlight that improve the lighting of the person in the frame, background blur, AI-powered filters, and auto framing. Apart from that, Poly Studio also enhances sound quality and reduces noise.

Further, the enterprise-focused devices are also equipped with HP Wolf Pro Security NGAV, antivirus software that uses machine learning to learn about and protect against new virus and malware attacks.

Understanding HP’s AI Offerings

HP’s AI offerings are interesting, as it is the only laptop manufacturer that has introduced first-party AI features. However, it also raises a few questions. The HP AI Companion, while a useful tool, will be competing with Microsoft’s Copilot’s chatbot capabilities. The choice of going for the closed model GPT-4 instead of an open-source model such as Mixtral or Llama is also interesting.

Gadgets 360 spoke with Vineet Gehani, Senior Director – Personal Systems category, HP India, to decode these questions and simplify the increasingly competitive AI PC segment. During the interaction, we covered the above-mentioned topics and tried to understand the relevance of AI PCs in present times.

HP AI Companion: A Secure Way to Use AI Chatbots

HP’s AI Companion, powered by GPT-4, is localised entirely within the device. There is an option to connect it to the server for more complex tasks, but the user decides whether or not to turn it on. At first glance, this would appear to be a more secure option compared to server-based AI chatbots.

HP AI Companion

“Everybody wants to do more, but they want to do it for themselves and not necessarily from a public sharing perspective. That is where the private HP AI Companion helps. That’s where all the security features come in,” Gehani said. Taking a holistic perspective of data security, he added, “We stand by the ethical use of AI capabilities. We have the right firewalling and the right protection equipped against that.”

But will a first-party local AI create friction with Microsoft? The question is relevant since Microsoft is aggressively pushing to dominate the AI PC space with the dedicated Copilot button, and its Windows OS comes with several Copilot-powered features. However, while Copilot sits on the device, the processing of the queries and generation of responses takes place on the cloud, which might make some users apprehensive about data security and privacy.

However, Gehani does not believe there is any scope for friction. “We see this to be complementary. Microsoft is enriching the AI ecosystem with Copilot features such as Cocreate and Live Captions. HP is offering a layer over that. I don’t see this as competing but as a complementary service.”

The Need for a Dedicated Copilot Key

While announcing the AI PC terminology, Microsoft urged original equipment manufacturers (OEMs) to add a Copilot key to qualify their device as either Copilot PC or Copilot+ PC. However, does the key hold any relevance for the end consumer?

“It gives you AI at your fingertips and makes using the AI that much easier. You can just tap the button to take the app live without having to navigate it via multiple clicks. It gives a better experience and makes it more intuitive,” Gehani explained.

The Relevance of AI PCs

Despite AI PCs arriving on the market, the average consumer does not have a lot of use cases for them. While asking the chatbot a bunch of questions might be fun for a while, the novelty wears off soon. Microsoft has offered several interesting features, but apps and experiences are still limited due to the lack of third-party apps with AI integration. The question that arises is whether the end consumer should invest in devices that have a powerful neural processing unit (NPU) but not enough use cases for it.

“We look at this from two lenses. For business users, it is about data scientists being able to do more. It is about corporate users getting more real-time inference from their workflows. There are also use cases of doing the work faster, more collaboratively, and multitasking,” Gehani said, highlighting that these features are already available with the current generation of AI PCs.

For retail consumers, it is an evolving journey, explained the HP India Senior Director. While AI can and will provide newer experiences and features in the future, a big current focus is on improving existing features. Giving a few examples, Gehani said generative AI is also assisting in improving hybrid work experiences, making audio and video conferences more intuitive, and providing better battery life in a portable device. He claims that HP laptops have already made significant advancements in providing these experiences. As far as new experiences are concerned, Gehani remains confident that these will arrive soon as well.

OpenAI Working on Project ‘Strawberry’ for ‘Deep Research’ Capabilities

ChatGPT maker OpenAI is working on a novel approach to its artificial intelligence models in a project code-named “Strawberry,” according to a person familiar with the matter and internal documentation reviewed by Reuters.

The project, details of which have not been previously reported, comes as the Microsoft-backed startup races to show that the types of models it offers are capable of delivering advanced reasoning capabilities.

Teams inside OpenAI are working on Strawberry, according to a copy of a recent internal OpenAI document seen by Reuters in May. Reuters could not ascertain the precise date of the document, which details a plan for how OpenAI intends to use Strawberry to perform research. The source described the plan to Reuters as a work in progress. The news agency could not establish how close Strawberry is to being publicly available.

How Strawberry works is a tightly kept secret even within OpenAI, the person said.

The document describes a project that uses Strawberry models with the aim of enabling the company’s AI to not just generate answers to queries but to plan ahead enough to navigate the internet autonomously and reliably to perform what OpenAI terms “deep research,” according to the source.

This is something that has eluded AI models to date, according to interviews with more than a dozen AI researchers.

Asked about Strawberry and the details reported in this story, an OpenAI company spokesperson said in a statement: “We want our AI models to see and understand the world more like we do. Continuous research into new AI capabilities is a common practice in the industry, with a shared belief that these systems will improve in reasoning over time.”

The spokesperson did not directly address questions about Strawberry.

The Strawberry project was formerly known as Q*, which Reuters reported last year was already seen inside the company as a breakthrough.

Two sources described viewing earlier this year what OpenAI staffers told them were Q* demos, capable of answering tricky science and math questions out of reach of today’s commercially-available models.

On Tuesday at an internal all-hands meeting, OpenAI showed a demo of a research project that it claimed had new human-like reasoning skills, according to Bloomberg. An OpenAI spokesperson confirmed the meeting but declined to give details of the contents. Reuters could not determine if the project demonstrated was Strawberry.

OpenAI hopes the innovation will improve its AI models’ reasoning capabilities dramatically, the person familiar with it said, adding that Strawberry involves a specialized way of processing an AI model after it has been pre-trained on very large datasets.

Researchers Reuters interviewed say that reasoning is key to AI achieving human or super-human-level intelligence.

While large language models can already summarize dense texts and compose elegant prose far more quickly than any human, the technology often falls short on common sense problems whose solutions seem intuitive to people, like recognizing logical fallacies and playing tic-tac-toe. When the model encounters these kinds of problems, it often “hallucinates” bogus information.

AI researchers interviewed by Reuters generally agree that reasoning, in the context of AI, involves the formation of a model that enables AI to plan ahead, reflect how the physical world functions, and work through challenging multi-step problems reliably.

Improving reasoning in AI models is seen as the key to unlocking the ability for the models to do everything from making major scientific discoveries to planning and building new software applications.

OpenAI CEO Sam Altman said earlier this year that in AI “the most important areas of progress will be around reasoning ability.”

Other companies like Google, Meta and Microsoft are likewise experimenting with different techniques to improve reasoning in AI models, as are most academic labs that perform AI research. Researchers differ, however, on whether large language models (LLMs) are capable of incorporating ideas and long-term planning into how they do prediction. For instance, one of the pioneers of modern AI, Yann LeCun, who works at Meta, has frequently said that LLMs are not capable of humanlike reasoning.

AI Challenges

Strawberry is a key component of OpenAI’s plan to overcome those challenges, the source familiar with the matter said. The document seen by Reuters described what Strawberry aims to enable, but not how.

In recent months, the company has privately been signaling to developers and other outside parties that it is on the cusp of releasing technology with significantly more advanced reasoning capabilities, according to four people who have heard the company’s pitches. They declined to be identified because they are not authorized to speak about private matters.

Strawberry includes a specialized way of what is known as “post-training” OpenAI’s generative AI models, or adapting the base models to hone their performance in specific ways after they have already been “trained” on reams of generalized data, one of the sources said.

The post-training phase of developing a model involves methods like “fine-tuning,” a process used on nearly all language models today that comes in many flavors, such as having humans give feedback to the model based on its responses and feeding it examples of good and bad answers.

Strawberry has similarities to a method developed at Stanford in 2022 called “Self-Taught Reasoner” or “STaR”, one of the sources with knowledge of the matter said. STaR enables AI models to “bootstrap” themselves into higher intelligence levels via iteratively creating their own training data, and in theory could be used to get language models to transcend human-level intelligence, one of its creators, Stanford professor Noah Goodman, told Reuters.

“I think that is both exciting and terrifying…if things keep going in that direction we have some serious things to think about as humans,” Goodman said. Goodman is not affiliated with OpenAI and is not familiar with Strawberry.
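Based on the published description of STaR (not OpenAI's or Stanford's actual code), one bootstrap round can be sketched as follows; `model.reason` and `model.fine_tune` are hypothetical stand-ins for a model that can generate a rationale plus an answer and be fine-tuned on new examples.

```python
def star_iteration(model, problems, answers):
    """One round of the STaR-style bootstrap: keep only rationales that led
    to a correct answer, then fine-tune the model on its own verified output."""
    new_training_data = []
    for problem, correct in zip(problems, answers):
        rationale, predicted = model.reason(problem)  # generate chain of thought + answer
        if predicted == correct:                      # keep only verified rationales
            new_training_data.append((problem, rationale, correct))
    # (The published STaR method also generates "rationalizations" for
    # problems the model got wrong, by showing it the correct answer.)
    model.fine_tune(new_training_data)                # bootstrap on its own output
    return model
```

Repeating this loop is what lets the model, in theory, learn from increasingly hard problems it could not solve in earlier rounds.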

Among the capabilities OpenAI is aiming Strawberry at is performing long-horizon tasks (LHT), the document says, referring to complex tasks that require a model to plan ahead and perform a series of actions over an extended period of time, the first source explained.

To do so, OpenAI is creating, training and evaluating the models on what the company calls a “deep-research” dataset, according to the OpenAI internal documentation. Reuters was unable to determine what is in that dataset or what an extended period of time would mean.

OpenAI specifically wants its models to use these capabilities to conduct research by browsing the web autonomously with the assistance of a “CUA,” or a computer-using agent, that can take actions based on its findings, according to the document and one of the sources. OpenAI also plans to test its capabilities on doing the work of software and machine learning engineers.

© Thomson Reuters 2024

What is the ‘responsible quantum technologies’ movement? | Explained

The United Nations recently said 2025 will be observed as the International Year of Quantum Science and Technology (IYQ). Many events focusing on quantum science and technology (S&T) are planned, including some to create awareness of its concepts and to explore its benefits for humankind.

The applications of quantum mechanics constitute an emerging technology, yet quantum S&T haven’t captured the public attention the way artificial intelligence (AI) or genome editing have. Nonetheless, quantum S&T applications in three domains — quantum computing, quantum sensors, and quantum communications — are in different stages of development worldwide.

What is responsible quantum S&T?

Quantum S&T are part of the ‘S&T plans’ of many governments and the subject of significant private sector investment. According to an estimate computed by consulting firm McKinsey last year, four sectors — automotives, chemicals, financial services, and life sciences — are expected to gain about $1.3 trillion in value by 2035 thanks to quantum S&T. Among investments by countries, China leads with $10 billion in 2022, followed by the European Union and the U.S. India’s contribution is currently $730 million (Rs 6,100 crore).

The value of quantum S&T is in transforming our abilities to transmit and make use of information across sectors. But they also carry the risk of misuse because of their dual-use potential, such as weakening digital security.

Researchers and some governments have thus been calling for practising responsible quantum technologies to harness the value of quantum S&T while engendering public trust. This is why, for example, the U.K.’s ‘National Quantum Strategy’ states, “We will ensure that regulatory frameworks drive responsible innovation and the delivery of benefits for the UK, as well as protecting and growing the economy and the UK’s quantum capabilities.”

What is quantum governance?

The World Economic Forum (WEF) was one of the first organisations to discuss quantum computing governance. Its ‘Quantum Governance’ framework for this is based on the principles of transparency, inclusiveness, accessibility, non-maleficence, equitability, accountability, and the common good. Members of the framework include those from national government agencies, academic institutions, and private sector leaders (including in India).

The WEF’s objective here is to accelerate the development of responsible quantum computing by building trust in the technology during its development to preempt and mitigate potential risks. The framework’s virtue is that it addresses responsible development up front rather than as an afterthought.

IBM, a major global player in quantum computing and a member of WEF’s initiative, has also said that its efforts to develop quantum S&T will focus on making a positive social impact and building a diverse and inclusive quantum community. According to the company, its contracts bar the use of its quantum products in potentially harmful applications and encourage the development of technologies that can protect organisations against the misuse of quantum computers.

Reality isn’t that simple of course. For example, a white paper published in the last week of June by Ernst & Young and the Responsible Technology Institute (RTI) of the University of Oxford cautioned against inflated expectations and overestimating our understanding of ethical issues. In particular, it called out the gaps between countries in terms of quantum S&T capacities and reasoned that lack of access to talent and technologies could widen the gaps further.

From another perspective, a group of academics from the U.S., Canada, and Europe recently proposed another framework for responsible quantum technologies. Here, the group has suggested 10 principles to guide the applications of quantum S&T, aligned with RRI values. ‘RRI’ stands for ‘responsible research and innovation’, a concept and practice endorsed by the European Commission. Many institutions worldwide, including funding agencies, have adopted it; it emphasises ‘anticipation’, ‘reflection’, ‘diversity’, and ‘inclusion’ while foregrounding public engagement and ethical considerations.

What do countries want?

These frameworks and initiatives have emerged largely from among researchers and are united in their focus on openness and their intention to maintain it. National policies, on the other hand, have preferred frameworks that confer stronger protections of intellectual property rights vis-à-vis quantum technologies.

For example, the U.S. National Quantum Strategy is clear: “the … government must work to safeguard relevant quantum research and development and intellectual property and to protect relevant enabling technologies and materials. Agencies responsible for either promoting or protecting quantum technologies should understand the security implications.”

Similarly, it may be naïve to expect that the private sector, with its large investments and desire for patents and profits, will favour sharing and openness in the name of responsible quantum technologies. There may be exceptional circumstances, but they won’t be the norm. This is why the Open Quantum Institute, initiated by the Geneva Science and Diplomacy Anticipator and hosted by CERN, is important: it has private sector support and can work on quantum technologies for all, at least to some extent.

What is the impact of policies?

Unfortunately, there aren’t many case studies yet on the impact of policy frameworks that have embedded responsible innovation in quantum S&T. One such study, published by University of Oxford researchers in 2021, pointed to the need for a more granular understanding of ‘responsibilities’ on the U.K. government’s part.

Despite these challenges, the fact remains that researchers, private entities, and governments have expressed interest in deliberating on the responsible dimension of quantum S&T development. The pursuit of responsible quantum technologies can’t be dismissed as a gimmick.

This is heartening, even if it remains unclear how, or whether, this engagement will translate into more meaningful policies and regulations.

Krishna Ravi Srinivas is adjunct professor of law, NALSAR University of Law, Hyderabad; consultant, RIS, New Delhi; and associate faculty fellow, CeRAI, IIT Madras.


Amazon Could Charge Fee for Unprofitable Alexa Service, Plans AI Revamp

Amazon is planning a major revamp of its decade-old, money-losing Alexa service to include conversational generative AI with two tiers of service, and it has considered a monthly fee of around $5 to access the superior version, according to people with direct knowledge of the company’s plans.

Known internally as “Banyan,” a reference to the sprawling ficus tree, the project would represent the first major overhaul of the voice assistant since it was introduced in 2014 along with the Echo line of speakers. Amazon has dubbed the new voice assistant “Remarkable Alexa,” the people said.

The sources include eight current and former employees who worked on Alexa and who spoke on the condition of anonymity because they were not authorized to discuss confidential projects.

Amazon has pushed workers towards a deadline of August to prepare the newest version of Alexa, three of the people said, noting that CEO Andy Jassy has taken a personal interest in seeing Alexa reinvigorated. In an April letter to shareholders, Jassy promised a “more intelligent and capable Alexa,” without providing additional details.

The company’s plans for Alexa, including pricing and release dates, could be altered or canceled depending on the progress of Project Banyan, the people cautioned.

“We have already integrated generative AI into different components of Alexa, and are working hard on implementation at scale—in the over half a billion ambient, Alexa-enabled devices already in homes around the world—to enable even more proactive, personal, and trusted assistance for our customers,” said an Amazon spokeswoman in a statement.

The service — which provides spoken answers to user queries, like the local weather, and can serve as a hub to control home appliances — was a pet project of Amazon founder Jeff Bezos, who envisioned a technology that could emulate the fictional voice computer portrayed on television’s Star Trek series.

For Amazon, keeping up with rivals in generative AI is critical as Google, Microsoft and OpenAI have garnered more favorable attention for their so-called chatbots that can respond almost instantaneously with full sentences to complicated prompts or queries.

The release of ChatGPT in late 2022 set off a frenzy of investing in AI firms and pushed chipmaker Nvidia past Amazon and others in market capitalization; Nvidia briefly became the world’s second-most valuable company.

Apple, too, is pushing ahead with its own AI strategy, including updating its Siri voice-activated software embedded in iPhones to offer more conversational answers.

Some of the Amazon employees who have worked on the project say Banyan represents a “desperate attempt” to revitalize the service, which has never turned a profit and was caught flat-footed amid the rise of competing generative AI products over the past 18 months. Those people said senior management has told them this year is a critical one for the service to finally demonstrate it can generate meaningful sales for Amazon.

Accessed primarily through Amazon TVs and Echo speaker devices, Alexa is popular mostly for setting timers, checking the weather, playing songs, and answering simple questions. Amazon’s hopes of goosing sales in its e-commerce operation through the service have fallen flat, mostly because users prefer to see the products they are buying for easy comparison.

The Seattle retailer cut thousands of jobs in the unit in late 2023, part of a major restructuring after a pandemic-fueled e-commerce surge lost steam.

‘MUST WIN’

With an embedded AI, Amazon expects Alexa customers will ask it for shopping advice, like which gloves and hat to buy for a mountain-climbing trip, the people said, similar to Rufus, the text-based service Amazon rolled out on its website earlier this year.

Some said senior management has told them 2024 represents a “must win” year for Alexa, which, along with the Prime membership and the Kindle and Fire devices, is among the brands most closely associated with Amazon.

But an AI-powered version of the service demonstrated in September has yet to be released to the broader public, while competitors have pushed out multiple updates to their chatbots. In the demonstration, Alexa lost its robotic tone and answered questions like the start time for a football game. “You can now have near-human-like conversations with Alexa,” promised Dave Limp, Amazon’s hardware chief at the time, who has since left the company.

Amazon is working to replace what it refers to internally as “Classic Alexa,” the current free version, with an AI-powered version, plus another tier that uses more powerful AI software for more complicated queries and prompts. People would have to pay at least $5 per month to access that tier, some of the people said. Amazon has also considered a price of roughly $10 per month, they said.

No tie-in with Amazon’s $139-per-year Prime membership is being considered, the people said.

As envisioned, the paid version could perform more intricate tasks such as composing a brief email, sending it and ordering dinner for delivery from Uber Eats, all from a single prompt, some of the people said. It could also eliminate the need to repeatedly say “Alexa” during a conversation with the software and offer more personalization, they said.
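Reuters doesn’t describe how such a feature would be built, but the behavior matches the “tool calling” pattern common in generative-AI assistants, where a model decomposes one natural-language prompt into a sequence of structured actions. Below is a minimal, hypothetical Python sketch of that pattern; every function, tool name, and address is invented for illustration, and the planner is stubbed where an LLM call would sit:

```python
# Hypothetical sketch of the "one prompt, several actions" pattern described
# above. Nothing here reflects Amazon's actual implementation: the planner is
# a stub standing in for the generative model, and every tool name, address,
# and identifier is invented for illustration.

def plan_actions(prompt: str) -> list[dict]:
    """Stand-in for a generative model that decomposes a prompt into steps.
    A real assistant would obtain this structured plan from an LLM call."""
    return [
        {"tool": "compose_email", "args": {"to": "team@example.com",
                                           "subject": "Running late"}},
        {"tool": "send_email",    "args": {"draft_id": "draft-1"}},
        {"tool": "order_food",    "args": {"service": "uber_eats",
                                           "item": "dinner"}},
    ]

# Registry of callable "tools"; each lambda stands in for a real integration.
TOOLS = {
    "compose_email": lambda args: f"drafted email to {args['to']}",
    "send_email":    lambda args: f"sent {args['draft_id']}",
    "order_food":    lambda args: f"ordered {args['item']} via {args['service']}",
}

def handle(prompt: str) -> None:
    """Run each planned step in order, dispatching to the matching tool."""
    for step in plan_actions(prompt):
        print(TOOLS[step["tool"]](step["args"]))

handle("Email the team that I'm running late, then order dinner on Uber Eats.")
```

The point of the pattern is that the model produces a structured plan rather than free text, so each step can be executed, logged, and checked against permissions before anything irreversible, like sending an email, happens.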

But the people said they struggled to see why customers would be willing to pay for a service, even a revamped one, that is offered for free today.

Amazon has also been plagued by false starts in developing the AI, as well as other challenges such as hallucinations (when software produces false or misleading information) and poor employee morale in the division.

Some of Amazon’s plans for the service, including its struggles with the performance of the underlying AI and its hopes for a paid tier, were previously reported by Business Insider; Reuters, however, is the first to report the tiered pricing, internal deadline, and potential monthly fee.

Amazon is also aiming to supercharge the home automation offered through Alexa, the people said. Alexa can now wirelessly connect to so-called smart devices so that they can be controlled by voice, allowing a user to, for example, turn on the porch lights every day at 8 pm.

But Remarkable Alexa could learn from users, powering on the television for a favorite weekly program or starting a user’s coffee pot after a morning alarm goes off. Such automation is possible today, but only through manually configured prompts that Amazon calls Routines.
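Amazon hasn’t published how Routines work internally, but a routine of the kind described, “porch lights on at 8 pm,” is essentially a daily time trigger bound to a device action. A minimal, hypothetical Python sketch of that idea follows; the device names and the polling loop are invented for illustration:

```python
# Hypothetical sketch of a "routine": a daily time trigger bound to a device
# action. Device names and the polling loop are invented for illustration;
# Amazon's actual Routines engine is not public.

import datetime
import time

ROUTINES = [
    # (time of day, device, action)
    (datetime.time(20, 0), "porch_lights", "on"),  # every day at 8:00 pm
    (datetime.time(6, 30), "coffee_pot",   "on"),  # after the morning alarm
]

def run_action(device: str, action: str) -> None:
    print(f"turning {device} {action}")            # stand-in for a device API call

def scheduler_loop() -> None:
    fired = set()                                  # fire each routine once per day
    while True:
        now = datetime.datetime.now()
        for trigger, device, action in ROUTINES:
            key = (now.date(), device, action)
            if now.time() >= trigger and key not in fired:
                run_action(device, action)
                fired.add(key)
        time.sleep(30)                             # poll twice a minute

if __name__ == "__main__":
    scheduler_loop()
```

What the article describes Remarkable Alexa doing is, in effect, writing entries like those in ROUTINES automatically by observing a user’s habits, instead of requiring the user to configure each trigger by hand.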

Some of the people noted that, for such a service to work properly, it will require customers to buy additional Alexa-enabled devices.

The company had been working on devices last year to get the service into more rooms of the house, such as Alexa-enabled home energy consumption trackers and a carbon monoxide detector, people familiar with the matter previously told Reuters.

© Thomson Reuters 2024



Analysis: In the age of AI, keep calm and vote on

This article is part of a series, Bots and ballots: How artificial intelligence is reshaping elections worldwide, presented by Luminate.

When I started this series on artificial intelligence, disinformation and global elections, I had a pretty clear picture in mind.

It came down to this: While AI had captured people’s imagination — and the likes of deepfakes and other AI-generated falsehoods were starting to bubble to the surface — the technology did not yet represent a step change in how politically motivated lies, often spread via social media, would alter the mega-election cycle engulfing the world in 2024.

Now, after nine stories and reporting trips from Chișinău to Seattle, I haven’t seen anything that would alter that initial view. But things, as always, are more complicated — and more volatile — than I first believed.

What’s clear, based on more than 100 interviews with policymakers, government officials, tech executives and civil society groups, is that the technology — specifically, generative AI — is getting more advanced by the day.

During the course of my reporting, I was shown deepfake videos, purportedly portraying global leaders like U.S. President Joe Biden and his French counterpart Emmanuel Macron, that were indistinguishable from the real thing. They included politicians allegedly speaking in multiple languages and saying things that, if true, would have ended their careers.

They were so lifelike that it would take a lot to convince anyone without deep technical expertise that an algorithm had created them.

Despite being a tech reporter, I’m not a fanboy of technology. But the speed of AI advancements, and their ease of use by those with little, if any, computer science background, should give us all pause.

The second key theme that surprised me in this series was how much oversight had been outsourced to companies — many of which were the same firms that created the AI systems that could be used for harm.

More than 25 tech giants have now signed up to the so-called AI Election Accords, voluntary commitments from companies including Microsoft, ByteDance and Alphabet to do what they can to protect global elections from the threat posed by AI.

Given the track record of many of these firms in protecting users from existing harms, including harassment and bullying on social media, it’s a massive leap of faith to rely on them to safeguard election integrity.

That’s despite the legitimate goodwill toward reducing politically motivated harm that I perceived in multiple interviews with corporate executives at these firms.

The problem, as of mid-2024, is that governments, regulators and other branches of the state are just not prepared for the potential threat — and it does remain potential — tied to AI.

Much of the technical expertise resides deep within companies. Legislative efforts, including the European Union’s recently passed Artificial Intelligence Act, are, at best, works in progress. The near-total lack of oversight of how social media platforms’ AI-powered algorithms operate makes it impossible to rely on anyone other than the tech giants themselves to police how these systems determine what people see online.

With AI advancing faster than you can say “large language model” and governments struggling to keep up, why am I still cautious about heralding this as the year of AI-fueled disinformation, just as billions of people head to the polls in 2024?

For now, I have a potentially naive belief that people are smarter than many of us think they are.

As easy as it is to think that one well-placed AI deepfake on social media may change the minds of unsuspecting voters, that’s not how people make their political choices. Entrenched views on specific lawmakers or parties make it difficult to shift people’s opinions. The fact that AI-fueled forgeries must be viewed in a wider context — alongside other social media posts, discussions with family members and interactions with legacy media — also hamstrings the ability of such lies to break through.

Where I believe we’re heading, though, is a “post-post-truth” era, where people will think everything, and I mean everything, is made up, especially online. Think “fake news,” but turned up to 11, where not even the most seemingly authentic content can be presumed to be 100 percent true.

We’re already seeing examples of politicians claiming that damaging social media posts are deepfakes when, in fact, they are legitimate. With the hysteria around AI often outpacing what the technology can currently do — despite daily advances — there’s now a widespread willingness to believe all content can be created via AI, even when it can’t. 

In such a world, it’s only rational to not have faith in anything.

The positive is that we’re not there yet. If the nine articles in this “Bots and Ballots” series show anything, it’s that, yes, AI-fueled disinformation is upon us. But no, it’s not an existential threat, and it must be viewed as part of a wider world of ‘old-school’ campaigning and, in some cases, foreign interference and cyberattacks. AI is an agnostic tool, to be wielded for good or ill.

Will that change in the years to come? Potentially. But for this year’s election cycle, your best bet is to remain vigilant without getting caught up in the hype train that artificial intelligence has become.

Mark Scott is POLITICO’s chief technology correspondent. He writes a weekly newsletter, Digital Bridge, about the global intersection of technology and politics. 

The article is produced with full editorial independence by POLITICO reporters and editors.


