Yearender 2023 | 5 big tests for global diplomacy

Let’s start with this week and the end of CoP28, the climate change summit held in Dubai, which concluded with a final document called the “UAE Consensus” that agreed to a number of actions.

The big takeaways: 

  1. Transitioning away from fossil fuels (oil, coal and gas) in energy production, but no phase-out 
  2. Tripling renewable energy capacity by 2030 
  3. Methane: accelerating and substantially reducing non-carbon-dioxide emissions globally, in particular methane emissions, by 2030 
  4. Net Zero by 2050: this is meant to push India, which has set 2070 as its net zero date, and China, which has set 2060, to earlier dates 
  5. A Loss and Damage fund adopted, with about $750 million committed by developed countries, most notably the UAE, France, Germany and Italy, towards the fund set up during CoP28  

However, critics described the final document as “weak tea”, “watered down” and a “litany of loopholes”, and some criticised the UAE CoP president directly for not ensuring stronger language against fossil fuels.

Where is the world? 

1. Of the P-5: the leaders of the US and China skipped the summit; Russian President Putin flew into Abu Dhabi with much fanfare and signed a number of energy deals, but didn’t go to the CoP; and the leaders of the UK and France attended CoP28.

2. Small Island States and climate-vulnerable countries, which bear the brunt of global warming, were the most critical.

Where is India? 

  1. India spoke essentially for the developing world, which does not want to commit to ending fossil fuel use that would slow its growth, and pushed for terms like “phase-out” and “coal-powered plants” to be cut from the text.
  2. India takes some pride in having exceeded the goals of its NDCs, which it is now updating, but it is making clear that it is not part of the global problem, contributing very little to emissions, and that it won’t be pushed into being the solution. 
  3. India is not prepared to bring forward its targets for Net Zero or for ending coal use. 
  4. PM Modi has now pitched to host CoP33 in 2028 

Let’s turn to the second and third big challenges to global diplomacy, both of which came from conflict. 

2. Russian war in Ukraine:

The war in Ukraine is heading toward its two-year mark. 

  • In a four-hour press conference this week, Russian President Vladimir Putin made it clear that the war in Ukraine will not end until Russia’s goals of demilitarisation and “denazification” of Ukraine are met; he certainly looked more confident about the way the war is moving. 
  • The OHCHR estimates civilian casualties in Ukraine since February 2022, including in territory now controlled by Ukraine and by Russia, at more than 40,000, while conflicting and contested figures put total military casualties at around 500,000. 
  • As aid dwindles to its lowest point since February 2022, Ukrainian President Volodymyr Zelensky has been travelling to the US, trying to rally support for more funds and arms. 

How is the world faring? 

  1. The UN Security Council is frozen over the issue, with Russia vetoing any resolutions against it. 
  2. On the one-year anniversary of the Russian invasion, the UNGA passed a resolution calling on Russia to “leave Ukraine”: 141 countries voted in favour, 7 against, with 32 abstentions, including India. 
  3. In March 2023, the International Criminal Court issued an arrest warrant for President Putin; however, no country Mr. Putin has visited since, including China, Central Asian states, the UAE and Saudi Arabia, has enforced it. 
  4. After a near breakdown in talks at the G20 in Delhi, India was able to forge a consensus document that brought the world together for a brief moment; the document didn’t criticize Russia but called for peace in Ukraine, which Kyiv said it was disappointed by. 

India: 

  1. India has continued to abstain at the UN, offering no criticism of Russia, and has continued to buy increasing amounts of Russian oil; imports have risen a whopping 2,200% since the war began. 
  2. India has also continued its weapons imports from Russia, although many shipments have been delayed by Russian production and payment-mechanism problems. 
  3. However, India has clearly reduced its engagement with Moscow: PM Modi will skip the annual India-Russia summit for the second year running, and India dropped plans to host the SCO summit in person, making it virtual instead. 

3. October 7 attacks and Israel’s bombing of Gaza  

2023 is now known as the year of two conflicts, with many questioning whether the US can continue funding its allies in both. 

- The current turn of the conflict began on October 7, when Hamas carried out a number of coordinated terror strikes on Israeli settlements along the border with Gaza, brutally killing 1,200 people and taking 240 hostages, with allegations of beheadings and rape against the Hamas terrorists. 

– Israel’s retaliation, pounding Gaza for more than two months in an effort to finish Hamas and rescue the hostages, has been devastating: 29,000 munitions dropped, more than 18,000 killed, more than 7,000 of them children, and, with every kind of infrastructure in the north and south being flattened, more than 1.8 million people, 80% of the population, left homeless. 

Where is the world? 

– The UNSC is again paralysed, with the US vetoing every resolution against Israel 

– The UNGA has passed two resolutions with overwhelming support: in October, 120 countries, two-thirds of those present, voted in favour of a ceasefire; in December, 153 countries, four-fifths of those present, voted in favour, with severe criticism of Israel’s actions. 

– Several countries have withdrawn their diplomats from Tel Aviv, but Arab states, which have held several conferences, have so far not cut off their ties with Israel. 

– Netanyahu has rejected the UN calls, saying the bombing won’t stop until Hamas is eliminated. 

– The Global South has voted almost as a bloc, criticizing Israel for its disproportionate response and indiscriminate bombing. 

Where is India? 

  1. When the October 7 attacks took place, India seemed to change its stance, issuing strong statements on terrorism and calling for a zero-tolerance approach. In the UNGA vote in October, India abstained, a major shift from its past policy.
  2. However, as the death toll from Israel’s bombardment has risen and the global mood has shifted, India has moved back towards its original position, expressing concern for Palestinian victims and sending aid, and then, this week, voting for the UNGA resolution, the first time India has called for a ceasefire.
  3. The shifts and hedging in position have left India without a leadership role in the conflict, out of step with both the Global South and South Asia itself. 

4. Afghanistan – Taliban and Women 

  • This is an area where the world has scored a big F for failure. Two and a half years after the Taliban took over Kabul, there is little hope of loosening its grip on the country. 
  • The interim Taliban government, which includes many members on UN terrorist lists, remains in place; it includes no women, and no talks are taking place about a more inclusive, democratic or representative government. 
  • With the economy in shambles, sanctions in place and aid depleted, 15 million Afghans face acute food insecurity, and nearly 3 million people face severe malnourishment or starvation. An earthquake this year compounded the problems. Adding to the misery, 500,000 Afghan refugees have been sent back from Pakistan, lacking food, clothing or shelter. 
  • Girls are not allowed to go to school in most parts of the country, female students can’t pursue higher studies, and women are not allowed to hold most jobs or use public places such as parks and gyms. 
  • While the UN doesn’t recognize the Taliban, nearly 20 countries, including India, now run embassies in Kabul, and most countries treat the Taliban as the official regime. 
  • No country today gives more than lip service to supporting the armed resistance or even the democratic exiles in different parts of the world. 

Where is India? 

  • India has reopened its mission in Kabul, and as of last month, the embassy of the old democratic regime in Delhi was forced to shut down due to lack of funds and staff; it has now been reopened by Afghan consuls from Mumbai and Hyderabad, who engage the Taliban regime, although they still fly the old democratic regime’s flag. 
  • India has sent food and material aid to Afghanistan, first through Pakistan and then via Chabahar, and Indian officials regularly engage the Taliban leadership in Kabul. 
  • Unlike its 1996-2001 policy towards the Taliban, India has not taken in any Afghan refugees, and has rejected visas for students, businesspersons and even spouses of Indian citizens. 
  • India does not support the armed resistance or any democratic exiles, and is not taking a leadership role on the crisis, yielding space to China and Russia instead 

5. Artificial Intelligence 

Finally, to the global diplomacy challenge the world is just waking up to: AI. 

  • For the past few decades, military powers have been developing AI for use in robotic warfare, in ever more sophisticated drone technology, and in other areas.
  • Industry has also long worked on AI applications in machine intelligence, from communications and R&D to machine manufacturing. 
  • However, the use of AI in information warfare has now become a cause for concern about everything from job losses to cyber-attacks and the control humans actually have over these systems, and the world is looking for ways to find common ground on regulating it. 
  • Last month, the UK hosted the first global AI Safety Summit, with PM Rishi Sunak bringing in US VP Kamala Harris, European Commission chief Ursula von der Leyen, UN Secretary-General Guterres and others; countries agreed on an AI panel resembling the Intergovernmental Panel on Climate Change to chart the course for the world. 
  • India hosted this year’s Global Partnership on AI summit in Delhi this month, comprising 28 countries and the EU, which looks at the “trustworthy development, deployment, and use of AI”. At the Modi-Biden meeting in Washington this year, India and the US also embarked upon a whole new tech partnership. 

Clearly the AI problem and its potential is a work in progress, and we hope to do a full show on geopolitical developments in AI when we return with WorldView next year. 

WV Take: What’s WV’s take on the year gone by? Simply put, this has been a year in which global consensus and global action were weaker than ever before. As anti-globalisation forces turn countries more protectionist and anti-immigration, and as fewer countries are willing to follow international rule of law and humanitarian principles, the entire system of global governance has gone into decline. India’s path into such a future is threefold: to strengthen the global commons as much as possible, to seek global consensus on futuristic challenges, and to understand the necessity of smaller, regional groupings for both security and prosperity alternatives. 

WV Yearender Reading recommendations: 

  1. India’s Moment: Changing Power Equations around the World by Mohan Kumar, a former diplomat, now an academic and economic expert; an easy read that will make a lot of sense 
  2. Unequal: Why India Lags Behind Its Neighbours by Swati Narayan; a startling work of research, with a compelling argument on the need to pay more attention to human development indices 
  3. India’s National Security Challenges, edited by N.N. Vohra, with some superb essays on the need for a national security policy and defence reforms 
  4. The Age of AI: And Our Human Future by Henry Kissinger, Eric Schmidt and Daniel Huttenlocher 
  5. Conflict: A Military History of the Evolution of Warfare from 1945 to Ukraine by Andrew Roberts and Gen. David Petraeus (Retd) 
  6. The Power of Geography: Ten Maps that Reveal the Future of Our World and The Future of Geography: How Power and Politics in Space Will Change Our World, both by Tim Marshall

Script and Presentation: Suhasini Haidar

Production: Kanishkaa Balachandran & Gayatri Menon


In 2024 elections, we have to act against AI-aggravated bias

The opinions expressed in this article are those of the author and do not represent in any way the editorial position of Euronews.

Journalists must give a voice to the underrepresented and underprivileged communities at the receiving end of much of the misinformation that drives polarising narratives and undermines trust in democracy itself, Meera Selva writes.


2024 is going to be the year of elections driven by AI-boosted campaigning, global conflict, and ever more pervasive AI tools.

Some 2 billion people will go to the polls in 65 elections to select leaders who will have campaigned, communicated, and fundraised online, and who know their terms in office will be defined by the digital space.

Voting will happen in some of the most densely populated countries in the world, where media has been upended by digital communications, including Indonesia, India, and Mexico. 

And these elections will be among the first to take place after the sudden popularisation of generative AI technologies — casting further uncertainty on how they will play out. 

There is an argument that fears of AI are overblown, and most people will not have their behaviour altered by exposure to AI-generated misinformation. 2024 will offer some evidence as to whether or not that’s true.

Small groups will play big roles. Elections are now often so closely contested that the final results can be turned by proportionately very few voters. 

Mistrust or hostility towards one small group can end up defining the whole national debate. Communities of colour and immigrant communities can be affected disproportionately by misinformation in election times, by both conspiracy theories undermining their trust in the process, and incorrect information on how to vote.

That is why the needs and voices of minority communities must be foregrounded in these elections. Whether AI tools will help or hinder that is still an open question.

No editorial checks will make things worse

Some of the biggest dangers widely accessible AI technologies will pose in global elections stem from a lack of diversity in design and leadership.

There is already a trend for misinformation to spread via mistranslations — words that have different, often more negative connotations when translated from one language, usually English, to another. 

This will only worsen with AI-powered translations done at speed without editorial checks or oversight from native language speakers.

Some AI tools also play on existing prejudices against minorities: in Slovakia’s elections this autumn, an alleged audio recording of one candidate telling a journalist about a plan to buy votes from the Roma minority, who are structurally discriminated against and often viewed with hostility, spread fast on Facebook. 

The truth that the recording had been altered came too late: the candidate in question, Michal Simecka, lost to former Prime Minister Robert Fico, who returned to power after having resigned in 2018 following outrage over the murder of an investigative journalist.

Using tech to keep discriminating against others

In India, there are fears that popular AI tools are entrenching existing discrimination on lines of caste, religion and ethnicity. 

During communal riots in Delhi in 2020, police used AI-powered facial recognition technology to arrest rioters. Critics point out the technology is more likely to be used against Muslims, indigenous communities, and those from the Dalit caste as the country’s elections draw near.

These fears are backed up by research from Queen’s University Belfast, which showed other ways that the use of AI in election processes can harm minorities. 

If the technology is used for administering mailing lists or deciding where polling stations should be located, there is a real risk that this will result in minority groups being ignored or badly served.

Many of the problems of diversity in AI-generated content come from the data sets the technology is trained on, but the demographics of AI teams are also a factor. 


A McKinsey report on the state of AI in 2022 shows that women are significantly underrepresented, and a shocking 29% of respondents said they have no minority employees working on their AI solutions. 

As AI researcher Dr Sasha Luccioni recently pointed out, women are even excluded from the way AI is reported on.

There are benefits to AI, too

It’s clear AI will play a significant role in next year’s elections. Much of it will be beneficial: it can be used to power chatbots to engage citizens in political processes and can help candidates understand messages from the campaign trail more easily.

I see this first-hand in my daily work: Internews partners up with local, independent media outlets around the world that are creatively using AI tools to improve the public’s access to good information. 

In Zimbabwe, the Center for Innovation and Technology is using an AI-generated avatar as a real-time newsreader, which can have its speech tailored to local accents and dialects, reaching communities that are rarely represented in newsrooms. 


And elsewhere in Africa, newsrooms are using AI tools to detect bias and discrimination in their stories.

The same AI tools will almost certainly be used by malicious actors to generate deep fakes, fuel misinformation, and distort public debate at warp speed. 

The Philippines, for example, has had its political discourse upended by social media, to the extent that its most famous editor, the Pulitzer Prize-winning Maria Ressa, warned that the Philippines is the canary in a coal mine on the interface of technology, communications, and democracy; anything that happens there will happen in the rest of the world within a few years. 

There is pushback however and Filipino society is taking action — ahead of next year’s elections, media organizations and civil society have come together to create ethical AI frameworks as a starting point for how journalists can use this new technology responsibly.

Giving voice to those on the receiving end remains vital

But these kinds of initiatives are only part of the solution. Journalism alone cannot solve the problems posed by generative AI in elections, in the same way it cannot solve the problems of mis- and disinformation. 


This is an issue regulators, technology companies, and electoral commissions must work on alongside civil society groups — but that alone also won’t suffice. 

It is vital that journalists give a voice to the underrepresented and underprivileged communities at the receiving end of much of the misinformation that drives polarising narratives and undermines trust in elections, and ultimately in democracy itself.

We didn’t pay enough attention to underserved communities and minority groups when social media first upended electoral processes worldwide, contributing to the democratic backsliding and division we see today. Let us not make the same mistake twice.

Meera Selva is the Europe CEO of Internews, a global nonprofit supporting independent media in 100+ countries.

At Euronews, we believe all views matter. Contact us at [email protected] to send pitches or submissions and be part of the conversation.



Israel’s appetite for high-tech weapons highlights a Biden policy gap

Within hours of the Hamas attack on Israel last month, a Silicon Valley drone company called Skydio began receiving emails from the Israeli military. The requests were for the company’s short-range reconnaissance drones — small flying vehicles used by the U.S. Army to navigate obstacles autonomously and produce 3D scans of complex structures like buildings.

The company said yes. In the three weeks since the attack, Skydio has sent more than 100 drones to the Israeli Defense Forces, with more to come, according to Mark Valentine, the Skydio executive in charge of government contracts.

Skydio isn’t the only American tech company fielding orders. Israel’s ferocious campaign to eliminate Hamas from the Gaza Strip is creating new demand for cutting-edge defense technology — often supplied directly by newer, smaller manufacturers, outside the traditional nation-to-nation negotiations for military supplies.

Already, Israel is using self-piloting drones from Shield AI for close-quarters indoor combat and has reportedly requested 200 Switchblade 600 kamikaze drones from another U.S. company, according to DefenseScoop. Jon Gruen, CEO of Fortem Technologies, which supplied Ukrainian forces with radar and autonomous anti-drone aircraft, said he was having “early-stage conversations” with Israelis about whether the company’s AI systems could work in the dense, urban environments in Gaza.

This surge of interest echoes the one driven by the even larger conflict in Ukraine, which has been a proving ground for new AI-powered defense technology — much of it ordered by the Ukrainian government directly from U.S. tech companies.

AI ethicists have raised concerns about the Israeli military’s use of AI-driven technologies to target Palestinians, pointing to reports that the army used AI to strike more than 11,000 targets in Gaza since Hamas militants launched a deadly assault on Israel on Oct 7.

The Israeli defense ministry did not elaborate in response to questions about its use of AI.

These sophisticated platforms also pose a new challenge for the Biden administration. On Nov. 13, the U.S. began implementing a new foreign policy to govern the responsible military use of such technologies. The policy, first unveiled in The Hague in February and endorsed by 45 other countries, is an effort to keep the military use of AI and autonomous systems within the international law of war.

But neither Israel nor Ukraine are signatories, leaving a growing hole in the young effort to keep high-tech weapons operating within agreed-upon lines.

Asked about Israel’s compliance with the U.S.-led declaration on military AI, a spokesperson for the State Department said “it is too early” to draw conclusions about why some countries have not endorsed the document, or to suggest that non-endorsing countries disagree with the declaration or will not adhere to its principles.

Mark Cancian, a senior adviser with the CSIS International Security Program, said in an interview that “it’s very difficult” to coordinate international agreement between nations on the military use of AI for two reasons: “One is that the technology is evolving so quickly that the description constraints you put on it today may no longer be relevant five years from now because the technology will be so different. The other thing is that so much of this technology is civilian, that it’s hard to restrict military development without also affecting civilian development.”

In Gaza, drones are being largely used for surveillance, scouting locations and looking for militants without risking soldiers’ lives, according to Israeli and U.S. military technology developers and observers interviewed for this story.

Israel discloses few specifics of how it uses this technology, and some worry the Israeli military is using unreliable AI recommendation systems to identify targets for lethal operations.

Ukrainian forces have used experimental AI systems to identify Russian soldiers, weapons and unit positions from social media and satellite feeds.

Observers say that Israel is a particularly fast-moving theater for new weaponry because it has a technically sophisticated military, large budget, and — crucially — close existing ties to the U.S. tech industry.

“The difference, now maybe more than ever, is the speed at which technology can move and the willingness of suppliers of that technology to deal directly with Israel,” said Arun Seraphin, executive director of the National Defense Industrial Association’s Institute for Emerging Technologies.

Though the weapons trade is subject to scrutiny and regulation, autonomous systems also raise special challenges. Unlike traditional military hardware, buyers are able to reconfigure these smart platforms for their own needs, adding a layer of inscrutability to how these systems are used.

While many of the U.S.-built, AI-enabled drones sent to Israel are not armed and not programmed by the manufacturers to identify specific vehicles or people, these airborne robots are designed to leave room for military customers to run their own custom software, which they often prefer to do, multiple manufacturers told POLITICO.

Shield AI co-founder Brandon Tseng confirmed that users are able to customize the Nova 2 drones that the IDF is using to search for barricaded shooters and civilians in buildings targeted by Hamas fighters.

Matt Mahmoudi, who authored Amnesty International’s May report documenting Israel’s use of facial recognition systems in Palestinian territories, told POLITICO that historically, U.S. technology companies contracting with Israeli defense authorities have had little insight or control over how their products are used by the Israeli government, pointing to several instances of the Israeli military running its own AI software on hardware imported from other countries to closely monitor the movement of Palestinians.

Complicating the issue are the blurred lines between military and non-military technology. In the industry, the term is “dual-use” — a system, like a drone-swarm equipped with computer-vision, that might be used for commercial purposes but could also be deployed in combat.

The Technology Policy Lab at the Center for a New American Security writes that “dual-use technologies are more difficult to regulate at both the national and international levels” and notes that in order for the U.S. to best apply export controls, it “requires complementary commitment from technology-leading allies and partners.”

Exportable military-use AI systems can run the gamut from commercial products to autonomous weapons. Even in cases where AI-enabled systems are explicitly designed as weapons, meaning U.S. authorities are required by law to monitor the transfer of these systems to another country, the State Department only recently adopted policies to monitor civilian harm caused by these weapons, in response to Congressional pressure.

But enforcement is still a question mark: Josh Paul, a former State Department official, wrote that a planned report on the policy’s implementation was canceled because the department wanted to avoid any debate on civilian harm risks in Gaza from U.S. weapons transfers to Israel.

A Skydio spokesperson said the company is currently not aware of any users breaching its code of conduct and would “take appropriate measures” to mitigate the misuse of its drones. A Shield AI spokesperson said the company is confident its products are not being used to violate humanitarian norms in Israel and “would not support” the unethical use of its products.

In response to queries about whether the U.S. government is able to closely monitor high-tech defense platforms sent by smaller companies to Israel or Ukraine, a spokesperson for the U.S. State Department said it was restricted from publicly commenting or confirming the details of commercially licensed defense trade activity.

Some observers point out that the Pentagon derives some benefit from watching new systems tested elsewhere.

“The great value for the United States is we’re getting to field test all this new stuff,” said CSIS’s Cancian — a process that takes much longer in peacetime environments and allows the Pentagon to place its bets on novel technologies with more confidence, he added.




UK AI summit: US-led AI pledge threatens to overshadow Bletchley Park

US vice president Kamala Harris spoke about artificial intelligence at the US embassy in London on 1 November

Maja Smiejkowska/Reuters

This week, UK prime minister Rishi Sunak is hosting a group of more than 100 representatives from the worlds of business and politics to discuss the potential and pitfalls of artificial intelligence.

The AI Safety Summit, held at Bletchley Park, UK, began on 1 November and aims to come up with a set of global principles with which to develop and deploy “frontier AI models” – the terminology favoured by Sunak and key figures in the AI industry for powerful models that don’t yet exist, but may be built very soon.

While the Bletchley Park event is the focal point, there is a wider week of fringe events being held in the UK, alongside a raft of UK government announcements on AI. Here are the latest developments.

Participants sign agreement

The key outcome of the first day of the AI Safety Summit yesterday was the Bletchley Declaration, which saw 28 countries and the European Union agree to meet more in the future to discuss the risks of AI. The UK government was keen to tout the agreement as a massive success, while impartial observers were more muted about the scale of its achievement.

While the politicians on stage wanted to highlight the successes, a good proportion of those who were at the summit felt more needed to be done. At 4pm yesterday, just before the closing plenary rounding up of the conclusions of the first day’s panels was due to begin, nearly a dozen civil society groups present at the conference released a communique of their own.

The letter urged those in attendance to consider a broader range of risks to humanity beyond the fear that AI might become sentient or be misused by terrorists or criminals. “The call for regulation comes from those who believe AI’s harm to democracy, civil rights, safety and consumer rights is urgent now, not in a distant future,” says Marietje Schaake at Stanford University in California, who was one of the signatories. Schaake was also keen to point out that the discussion “process should be independent and not an opportunity for capture by companies”.

US flexes muscles further

While attention has been devoted to Bletchley Park, a good proportion of the headway made on AI has been taking place outside the conference – and we aren’t just saying that because reporters attending are locked in a media room, and only allowed out if they have a prearranged interview.

One case in point: at the US Embassy in London on 1 November, US vice president Kamala Harris unveiled a package of actions on AI that includes a political declaration signed by 30 other countries – notably more than those who signed up to the Bletchley Declaration trumpeted by the UK.

Harris carefully chose her words in her speech, saying that the US package would focus on the “full spectrum” of risks from AI. “Let us be clear, there are additional threats that also demand our action. Threats that are currently causing harm, and which to many people also feel existential,” Harris said in her speech – which could be taken as a suggestion the UK’s focus on AI gaining sentience was too myopic.

Four in 10 people say AI is moving too fast

As politicians and experts try to thrash out some form of agreement to conclude the summit, the public began to have their own say – in the form of survey data and public polling that was released to coincide with the summit.

Four in 10 people in the UK surveyed by polling company Survation believe that AI is being developed and unleashed at an unsafe pace. Respondents largely supported slowing down how the technology is rolled out to the public to prioritise safety, with 71 per cent in favour, while just 17 per cent say that the current pace of development is safe.

The polling also highlighted the challenges of making the public aware of what AI is and how it works (we have a definition, and some guidance, that you can read here). Of those surveyed, 41 per cent admitted they don’t know much about AI – or don’t know anything about it at all. Speaking of which: Elon Musk took his time on the sidelines of the conference to warn that AI will outsmart humans.

Who is attending the AI summit at Bletchley Park and why do they matter?

Yoshua Bengio, a computer science professor at the University of Montreal, Canada, is often called one of the “godfathers of AI” alongside Geoffrey Hinton and Yann LeCun (see below). Unlike Hinton, who used to work for Google, and LeCun, who still works for Meta, Bengio has tended to steer clear of big tech’s grasp.

Elon Musk runs his own AI company, xAI, as well as owning the social media platform X. He is set to play a pivotal role in this summit – not least because he has got the ear of Sunak, who will be appearing in a livestreamed conversation on X on 2 November. That appears to be a quid pro quo for Musk being a major guest at social events the UK government is planning around the conference.

Nick Clegg was once deputy prime minister of the United Kingdom, but has since become a senior figure at Meta, the company formerly known as Facebook. He will be offering a twinned perspective at the summit from his time in politics and his new employment in tech.

Michelle Donelan is the UK’s technology secretary; her pre-politics career involved working in public relations for World Wrestling Entertainment. Donelan has said she doesn’t use ChatGPT and has made no bones about disagreeing with Musk, but has been praised for quietly meeting targets in her department.

Sam Altman is the CEO of OpenAI, the developer of ChatGPT and AI image generator DALL-E. Altman is a mercurial figure with a reputation for being something of a prepper (someone who worries about the end of the world). As early as 2016, he had drawn up plans to escape to a remote island owned by billionaire tech entrepreneur Peter Thiel in the event of a pandemic. It is believed he never made it due to border closures when the covid-19 pandemic arrived. Altman is perhaps the most powerful man in AI at present, thanks to ChatGPT’s central role in the generative AI revolution.

Ursula von der Leyen is president of the European Commission and was a welcome confirmed attendee after some uncertainty about whether she would turn up to Bletchley Park. Von der Leyen’s presence is likely to further her goal of developing a supranational group like the Intergovernmental Panel on Climate Change to focus on regulating AI across borders.

Yann LeCun is chief AI scientist at Meta and a professor at New York University. He is a proponent of open-source development in AI, which brings him into conflict with some of those at the summit who, on 1 November, said open-source development was too risky for AI. Today, he has praised the UK AI Safety Institute, which he hopes will bring hard data “to a field currently rife with wild speculations and methodologically dubious studies”.

Coming next

The final session of yesterday’s discussions felt oddly like the closing of the entire event. Donelan, the UK’s technology secretary, waxed lyrical about how the ink was still wet on a new page of history, among other images. But there is still a whole other day of discussions.

Today, Sunak wades in, convening a small group of governments, companies and experts “to further the discussion on what steps can be taken to address the risks in emerging AI technology and ensure it is used as a force for good”, at the same time as Donelan converses with her counterparts internationally to agree on next steps.

Once the summit is over, the prime minister will take part in a 45-minute conversation with Musk, which is likely to provide some fireworks. However, in an unusual step, the conversation will be streamed on X, Musk’s social network – but not live. The UK government has assured reporters nothing will be edited before transmission.


Follow the Smart Money; Technology and Homebuilder Stocks Loved Last Week’s Reversal in Bond Yields

The fear on Wall Street is rising to a fever pitch, as put option buyers recently accelerated their bets against the market while sentiment surveys reached levels of bearishness not seen since last October. As I’ve noted recently, fear is often the prelude to a tradable bounce. When fear runs high, it pays to follow the smart money, which is starting to flow back into stocks.

Fear is Reaching Extreme Levels

With so much fear among investors, stocks have entered a familiar kind of uncomfortable period: even though the market is oversold, investors continue to fret and sell in panic as worries about higher interest rates mount. The CBOE Put/Call ratio reading of 1.60 on 10/4/23 and the recent reading of 17 on the CNN Fear & Greed Index are both bullish from a contrarian standpoint.

Of course, oversold markets can stay oversold for longer than anyone expects. Yet as long as the market does not make new lows, the odds of a tradable bottom building continue to rise. There is also a light at the end of the proverbial tunnel, and that light is not an oncoming train: a sustained top and subsequent retracement in bond yields would likely trigger a rebound in stocks.

Here’s the laundry list of worries:

  • The Fed continues to push for higher interest rates;
  • The market’s breadth has broken down; and
  • Bond yields remain near multi-year highs.

Yet that may all change rather quickly, as the market’s breadth is showing signs of recovery and bond yields are looking a bit top-heavy. Moreover, it looks as if bargain hunters are moving into two key areas of the market.  

Smart Money Sneaks into Tech Stocks

It wasn’t long ago that Wall Street realized that AI stocks had risen too far too fast, and we saw a breakdown in the entire technology sector. Yet, money is quietly moving back into many of the same stocks that broke down when the so-called “AI bubble” burst in August.

The Invesco QQQ Trust (QQQ) is heavily weighted toward a handful of large-cap tech stocks, including Microsoft (MSFT) and Alphabet (GOOGL). And while it’s still early in what could be a bumpy recovery for the market, given the Fed’s continuing talk of “higher for longer” interest rates, QQQ, which often bottoms out before the rest of the market, may have already made its lows for the current pullback. At this point, the $350 area seems to be decent support, while $370 is the key short-term resistance level. Accumulation/Distribution (ADI) and On Balance Volume (OBV) are both improving as short sellers leave (ADI) and buyers start moving in (OBV).

A perfect example of the quiet flow of smart money can be seen in shares of Alphabet, which has remained in an uptrend throughout the recent market decline and is now within reach of breaking out.

Bond Yields Are Now Totally Crazy

Much to the chagrin of regular readers, I remain fixated on the action in the bond market. That’s because, if you haven’t noticed, stocks are trading in direct inverse lockstep with bond yields: rising bond yields lead to falling stock prices, and vice versa. You can thank the robot trader farms for that.

Recently, I’ve noted the U.S. Ten Year Treasury Note (TNX) yield has been trading well above its normal trading range. Specifically, TNX has been above the upper Bollinger Band corresponding to its 200-day moving average since August 11, 2022, except for a small dip back inside the band. As I noted in my recent video on Bollinger Bands, this is a very abnormal trading pattern, which usually precedes a meaningful reversal.
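The band condition described above is straightforward to compute. Here is a minimal Python sketch using pandas, with the 200-day window and the 2- and 3-standard-deviation bands taken from the text; the price series is synthetic, purely for illustration, not real TNX data:

```python
import numpy as np
import pandas as pd

def bollinger_bands(close: pd.Series, window: int = 200, num_std: float = 2.0):
    """Return the rolling mean plus upper/lower bands num_std deviations away."""
    ma = close.rolling(window).mean()
    sd = close.rolling(window).std()
    return ma, ma + num_std * sd, ma - num_std * sd

# Synthetic yield series for illustration (not actual TNX data).
rng = np.random.default_rng(0)
close = pd.Series(3.5 + np.cumsum(rng.normal(0.002, 0.02, 400)))

ma, upper2, _ = bollinger_bands(close, window=200, num_std=2.0)
_, upper3, _ = bollinger_bands(close, window=200, num_std=3.0)

# Flag days where the close sits above the band -- the "abnormal" pattern
# described in the text (above the 2-sigma band, or the 3-sigma extreme).
above_2sd = close > upper2
above_3sd = close > upper3
print(above_2sd.sum(), above_3sd.sum())
```

A close above the 3-sigma band is, by construction, rarer than one above the 2-sigma band, which is why repeated 3-sigma closes are treated here as an extreme reading.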

Indeed, something may be happening, and we may be in the early stages of the reversal I’ve been expecting. On 10/6/23, TNX turned down intraday after a jobs report, initially read as bearish for bonds, delivered an early rise in yields that took TNX to 4.9%.

The above chart shows that bond yields reached an even greater extreme recently, as TNX closed three standard deviations above its 200-day moving average on 10/2/23 and 10/6/23 (red line at top of chart), expanding the distortion in the market and likely raising the odds that bond yields will reverse their recent climb. Rising bond yields have led to rising mortgage rates and weakness in the homebuilder stocks, which, as I recently noted to subscribers of JoeDuarteInTheMoneyOptions.com and members of my Buy Me a Coffee page here, may be poised for a rebound.

As the chart below shows, rates (MORTGAGE) have skyrocketed in what looks to be an unsustainable move.

Such a move would normally be expected to trigger a major selloff in the homebuilder stocks. But we saw the opposite: the SPDR S&P Homebuilders ETF (XHB) is starting to put in a bottom as bond yields look set to roll over.

The take-home message is that homebuilder stocks are now marching in lockstep to the tune of the bond market. Once bond yields fully reverse, the odds favor a nice move up in homebuilder stocks.

Prepare for the next phase in the market. Join the smart money at JoeDuarteInTheMoneyOptions.com where I have just added five homebuilder stocks to the model portfolios. You can have a look at my latest recommendations FREE with a two week trial subscription. For frequent updates on real estate and housing, click here.

The Market’s Breadth Shows Signs of Stabilizing

The NYSE Advance Decline line (NYAD) fell below its 200-day moving average last week, but cemented its oversold status based on its most recent RSI reading near 30. Of some comfort is that the fledgling bottom in NYAD is developing near its recent March and May bottoms.

The Nasdaq 100 Index (NDX) has survived multiple tests of the 14500-15000 support area. ADI and OBV are both bouncing, which means short covering (ADI) and buying (OBV) are occurring simultaneously.

The S&P 500 (SPX) found support just below 4250 and looks set to test the resistance levels near the 20 and 50-day moving averages in the near future. ADI is rising as short sellers cover their positions. If OBV turns up, it will be even more bullish.

VIX Remains Below 20

As it has done for the past few weeks during which the market has corrected, VIX has remained stubbornly below the 20 area. A move above 20 would be very negative.

When the VIX rises, stocks tend to fall, as rising put volume is a sign that market makers are selling stock index futures to hedge their put sales to the public. A fall in VIX is bullish, as it means less put option buying, and it eventually leads to call buying, which causes market makers to hedge by buying stock index futures. This raises the odds of higher stock prices.

Liquidity Continues to Tighten

Liquidity is tightening. The Secured Overnight Financing Rate (SOFR) is an approximate sign of the market’s liquidity. It remains near its recent high in response to the Fed’s move and the rise in bond yields. A move below 5.0% would be bullish. A move above 5.5% would signal that monetary conditions are tightening beyond the Fed’s intentions, which would be very bearish.


To get the latest information on options trading, check out Options Trading for Dummies, now in its 4th Edition—Get Your Copy Now! Now also available in Audible audiobook format!

#1 New Release on Options Trading!

Good news! I’ve made my NYAD-Complexity – Chaos chart (featured on my YD5 videos) and a few other favorites public. You can find them here.

Joe Duarte

In The Money Options


Joe Duarte is a former money manager, an active trader, and a widely recognized independent stock market analyst since 1987. He is author of eight investment books, including the best-selling Trading Options for Dummies, rated a TOP Options Book for 2018 by Benzinga.com and now in its third edition, plus The Everything Investing in Your 20s and 30s Book and six other trading books.

The Everything Investing in Your 20s and 30s Book is available at Amazon and Barnes and Noble. It has also been recommended as a Washington Post Color of Money Book of the Month.

To receive Joe’s exclusive stock, option and ETF recommendations, in your mailbox every week visit https://joeduarteinthemoneyoptions.com/secure/order_email.asp.


The big AI and robotics concept that has attracted both Walmart and Softbank

Symbotic technology in use at a Walmart facility.

Courtesy: Walmart

Venture-capital giant Softbank notched a $15 billion-plus gain on its 2016 deal to buy Arm Holdings when the artificial intelligence-enabling semiconductor firm went public last month. But not as many investors know about Softbank’s “other” big AI investment, Wilmington, Mass.-based software and robotics maker Symbotic, which Walmart has taken a big stake in itself.

That may soon change.

Symbotic, a company that has already generated market heat selling AI-powered robotic warehouse management systems to clients including Walmart, Target and Albertsons, is partnering with Softbank to play in a potentially giant and transformative market. The two are teaming up in a joint venture called GreenBox Systems, which promises to deliver AI-powered logistics and warehousing as a service to much smaller companies, in facilities that different companies share. They say it’s a $500 billion market, and an example of the kind of change AI can bring to the economy at large.

If it works, GreenBox will reach companies that could never afford the required multimillion-dollar investment, in much the same way that cloud computing puts high-end information technology within reach, said Dwight Klappich, an analyst at technology research firm Gartner.

“I’ve seen a lot of robotics tech and I’ve never seen anything like it in my life,” TD Cowen analyst Joseph Giordano said. “Compared to what it replaces, it’s like day and night.” 

Erasing memories of a big WeWork real estate blunder

It might even mute the memory of Softbank’s most disastrous commercial real estate management investment ever, the notorious office-sharing company WeWork. 

Like WeWork, GreenBox is a promise to fuse technology and real estate. Indeed, its  sales pitch of “warehouse as a service” recalls the “space as a service” slogan in WeWork’s 2019 IPO prospectus almost exactly. The big difference: with WeWork, outside analysts struggled to identify what technological advantage WeWork ever offered clients over working at home or in traditional offices, let alone one that justified its peak valuation of $47 billion. WeWork today is worth under $150 million and is now under bankruptcy watch as it warned in August of its potential inability to remain “a going concern,” and more recently stopped making interest payments on debt, asking lenders to negotiate.

At GreenBox, the technology is the whole point, Giordano said. And unlike WeWork, which wanted people to change the way they used offices, Symbotic and GreenBox are out to let companies that already run warehouses boost efficiency and profits, he said. 

“Contract warehousing exists today – but those operations are mostly manual,” said Robert W. Baird analyst Rob Mason.

Softbank, perhaps not surprisingly, doesn’t like the WeWork analogy even being mentioned, with spokesperson Kristin Schwarz declining an interview request for Vikas Parekh, Softbank’s representative on Symbotic’s board (Parekh is also on WeWork’s board), after the firm learned CNBC would ask about it.

“If we are to put Vikas on the record for this, the interview would need to stay focused on GreenBox, and not on any other SoftBank topics,” Schwarz wrote in an e-mail. 

Softbank owns more than 8% of Symbotic, according to data from Robert W. Baird, and took it public through a special purpose acquisition company last year. Softbank also owns 65% of the GreenBox venture, which launched with $100 million in investment by the two companies. Walmart owns another 11% of Symbotic, according to a proxy statement from the robotics company, and is by far its biggest customer until the GreenBox venture ramps up, accounting for almost 90% of revenue.

“We share the same vision of going big and going fast,” Symbotic CEO Rick Cohen said. “We believe this market is massive.”

Symbotic has generated stock-market excitement even before the GreenBox deal. Its shares are up 190% this year. Sales in its most recent quarter climbed 77%, and orders for its existing warehouse-management systems jumped to $12 billion – a backlog it would take the company years to fulfill. Add in the $11 billion of Symbotic software and follow-on services GreenBox committed to buy over six years in July, and that backlog soars to $23 billion for a company that expects its first billion-dollar revenue year in fiscal 2023, and to break even on an EBITDA basis for the first time as a public company in the fourth quarter.

The best indication of the future may be from Walmart, which bought its Symbotic stake as part of the companies’ deal to automate the retailer’s 42 U.S. regional distribution centers for packaged consumer goods.

The product is the reason why, analysts say. 

At prices ranging from $25 million to hundreds of millions, according to a conference call Symbotic held with analysts in July, a Symbotic system combines dozens of autonomous robots that scoot around warehouses at speeds up to 25 mph, moving and unloading boxes from pallets and picking orders. AI software optimizes where in a warehouse to put individual cases of goods and lets boxes be packed to the warehouse’s ceiling, Giordano said, wasting much less space in the building.

The system works something like a disk drive that uses intelligence to store data efficiently and retrieve the right data on demand – but with boxes of stuff. And a large warehouse can use several different systems, piling up the required investment to get moving.

Because Symbotic’s system can track inventory down to the case easily, where stuff is put can be matched much more easily to incoming orders, making it possible to more fully automate order picking. It can also match the design of outgoing pallets to the layout of the store the pallet is headed to, speeding up unloading and shelf stocking, Klappich said. 

But the biggest innovation the tech allows is in business models, rather than in technology itself. That hasn’t spread outside of giant companies yet, but Giordano and Mason say they think it will.

The AI’s precision will let multiple companies share the same warehouse, and even commingle their goods for efficient shipping without confusion, much as cloud computing lets multiple clients share the same computer servers, Mason said. 

“Through sharing infrastructure, you can get out of the infrastructure business and focus on what’s important to you,” Klappich said. “Larger-scale automation without the capital expense has been a challenge.”

Born out of stealth work with Walmart, minting a multi-billionaire

The idea grew out of a vision Cohen had while running his family’s grocery distribution company, C&S Wholesale Grocers, which he has grown to $33 billion in annual revenue from $14 million since 1974. Symbotic was founded in 2006 and worked in stealth mode for years while refining its prototypes with Walmart.

“I’ve spent my whole life in the outsourcing and [logistics] business with C&S, so, this — the ability to run warehouses for people — has always been on the plate,” Cohen said in the July analyst call. “We said we’re going to take care of Walmart first. …We are now starting to say, I think we can do more.”

Symbotic and C&S have made the 71-year old Cohen one of America’s richest men, with a net worth hovering around $15.9 billion, according to Forbes. 

Symbotic teamed up with Softbank to build GreenBox in order to preserve its own capital, Cohen told analysts. The joint venture was initially capitalized 65% by Softbank and 35% by Symbotic, for a total of $100 million. Analysts say the venture will require much more capital, possibly raised by having GreenBox itself borrow money in the bond market. Symbotic said it will use its share of the profits from sales to GreenBox to keep its equity stake in the joint venture around 35%.

“The question has been, who has the capital to set it all up?” Klappich said. “Softbank could be the key because they have deep pockets.”

The joint venture will buy software from Symbotic, then turn around and sell the warehouse space, equipment and related services as a package to tenants. 

Many questions remain, and potential threats from Amazon, private equity

Much else about the new company remains unknown, beginning with the identity of its not-yet-announced chief executive, Mason said. The venture could either develop warehouses or rent them, though Symbotic said it will probably mostly rent them. Pricing for the warehouse-as-a-service is undisclosed. 

But the rise of GreenBox more than doubles Symbotic’s potential market, and nearly doubles its backlog. Symbotic has said that its total market is about $432 billion, a figure chief strategy officer Bill Boyd repeated on the conference call when the GreenBox alliance was announced. Early adopters will be in businesses like grocery and packaged goods, with Symbotic expanding into pharmaceuticals and electronics over time, according to Symbotic’s annual federal regulatory filing this year.

The GreenBox market for smaller companies shapes up as another $500 billion of possible demand, Gartner’s Klappich said. The estimates are based on the number of warehouses in those industries, the likely percentage of warehouses in each whose owners can afford the technology, either independently or through GreenBox, and the average price of Symbotic-like systems. 

The third quarter of the company’s fiscal year, which ends in October, illustrates how the company’s profits might scale. Revenue jumped 77% to $312 million, and its loss before interest, taxes and non-cash depreciation and amortization expenses shrank to $3 million. Mason says the company will turn profitable on an EBITDA basis in the fiscal year that begins this fall, before orders from GreenBox begin, and EBITDA will be “in the mid-teens” as a percent of sales by the following year.

Clients stand to save money all the way through the warehouse, Klappich said.

Giordano estimated the savings at eight hours of labor per outgoing truck. The technology can also cut space rental costs by allowing goods to be packed closer together and stacked higher. 

Using the facility as a service will let seasonal companies cut back on the space and robot time they use during slow periods, rather than carry them all year. The warehouse should run with many fewer workers, Giordano said. And GreenBox will pay for upgrades to robots and software every few years, rather than making tenants invest more, he said.

Walmart led investors on a tour of its Brooksville, Fla. warehouse in April, and said technology investments like the Symbotic alliance will let profits grow faster than sales. More than half of distribution volume will move through automated centers within three years, improving unit costs by about 20% as two-thirds of stores are served by automated systems. The company has said little about the impact on jobs, but CEO Doug McMillon said overall employment should stay about the same size but shift toward delivery from warehouse roles. 

Competition will be arriving soon enough, analysts say. Building something like Symbotic, and especially moving it down into the realm where companies other than global giants can afford it, takes a combination of technology, money and vision, Klappich said. 

Amazon could expand into the space, using its warehousing expertise in a service that resembles its Web hosting business model, or private-equity firms awash in investable cash might acquire combinations of companies to produce competing products and business models, Klappich said.

For Softbank, the payoff if GreenBox works is potentially huge. Analysts on average project Symbotic shares to rise another 53% in the next year after pulling back amid recent recession fears, according to ratings aggregator TipRanks. With post-IPO estimates arguing that Arm shares will stagnate, and taking into account that Softbank paid a reported $36 billion for Arm in 2016, it’s possible Symbotic will be the bigger win in the end, at least on a percentage basis, as the 65% share of GreenBox rises in value.


‘Counterfeit people’: The dangers posed by Meta’s AI celebrity lookalike chatbots

Meta announced on Wednesday the arrival of chatbots with personalities similar to certain celebrities, with whom it will be possible to chat. Presented as an entertaining evolution of ChatGPT and other forms of AI, this latest technological development could prove dangerous.

Meta (formerly known as Facebook) sees these as “fun” artificial intelligence. Others, however, feel that this latest technological development could mark the first step towards creating “the most dangerous artefacts in human history”, to quote from American philosopher Daniel C. Dennett’s essay about “counterfeit people”.

On Wednesday, September 27, the social networking giant announced the launch of 28 chatbots (conversational agents), which supposedly have their own personalities and have been specially designed for younger users. These include Victor, a so-called triathlete who can motivate “you to be your best self”, and Sally, the “free-spirited friend who’ll tell you when to take a deep breath”.

Internet users can also chat to Max, a “seasoned sous chef” who will give you “culinary tips and tricks”, or engage in a verbal joust with Luiz, who “can back up his trash talk”. 

A chatbot that looks like Paris Hilton

To reinforce the idea that these chatbots have personalities and are not simply an amalgam of algorithms, Meta has given each of them a face. Thanks to partnerships with celebrities, these robots look like American jet-setter and DJ Paris Hilton, TikTok star Charli D’Amelio and American-Japanese tennis player Naomi Osaka.

Read moreShould we worry? ChatGPT passes Ivy League business exam

And that’s not all. Meta has opened Facebook and Instagram accounts for each of its conversational agents to give them an existence outside chat interfaces, and is working on giving them a voice by next year. The parent company of Mark Zuckerberg’s empire was also looking for screenwriters who can “write character, and other supporting narrative content that appeal to wide audiences”.

Meta may present these 28 chatbots as an innocent undertaking to entertain young internet users en masse, but all these efforts point towards an ambitious project to build AIs that resemble humans as much as possible, writes Rolling Stone.

This race to “counterfeit people” worries many observers, who are already concerned about recent developments made in large language model (LLM) research such as ChatGPT and Llama 2, its Facebook counterpart. Without going as far as Dennett, who is calling for people like Zuckerberg to be locked up, “there are a number of thinkers who are denouncing these major groups’ deliberately deceptive approach”, said Ibo van de Poel, professor of ethics and technology at the Delft University of Technology in the Netherlands.

AIs with personalities are ‘literally impossible’

The idea of conversational agents “with a personality is literally impossible”, said van de Poel. Algorithms are incapable of demonstrating “intention in their actions or ‘free will’, two characteristics that are considered to be intimately linked to the idea of a personality”.

Meta and others can, at best, imitate certain traits that make up a personality. “It must be technologically possible, for example, to teach a chatbot to act like the person they represent,” said van de Poel. For instance, Meta’s AI Amber, which is supposed to resemble Hilton, may be able to speak the same way as its human alter ego. 

The next step will be to train these LLMs to express the same opinions as the person they resemble. This is a much more complicated behaviour to programme, as it involves creating a sort of accurate mental picture of all of a person’s opinions. There is also a risk that chatbots with personalities could go awry. One of the conversational agents that Meta tested expressed “misogynistic” opinions, according to the Wall Street Journal, which was able to consult internal company documents. Another committed the “mortal sin” of criticising Zuckerberg and praising TikTok.

To build these chatbots, Meta explains that it set out to give them “unique personal stories”. In other words, these AIs’ creators have written biographies for them in the hopes that they will be able to develop a personality based on what they have read about themselves. “It’s an interesting approach, but it would have been beneficial to add psychologists to these teams to get a better understanding of personality traits”, said Anna Strasser, a German philosopher who was involved in a project to create a large language model capable of philosophising.

Meta’s latest AI project is clearly driven by a thirst for profit. “People will no doubt be prepared to pay to be able to talk and have a direct relationship with Paris Hilton or another celebrity,” said Strasser.

The more users feel like they are speaking with a human being, “the more comfortable they’ll feel, the longer they’ll stay and the more likely they’ll come back”, said van de Poel. And in the world of social media, time – spent on Facebook and its ads –  is money.

Tool, living thing or somewhere between?

It is certainly not surprising that Meta’s first foray into AI with “personality” is a set of chatbots aimed primarily at teenagers. “We know that young people are more likely to anthropomorphise,” said Strasser.

However, the experts interviewed feel that Meta is playing a dangerous game by stressing the “human characteristics” of their AIs. “I really would have preferred if this group had put more effort into explaining the limits of these conversational agents, rather than trying to make them seem more human”, said van de Poel.

Read moreChatGPT: Cybercriminals salivate over world-beating AI chatbot

The emergence of these powerful LLMs has upset “the dichotomy between what is a tool or object and what is a living thing. These ChatGPTs are a third type of agent that stands somewhere between the two extremes”, said Strasser. Human beings are still learning how to interact with these strange new entities, so by making people believe that a conversational agent can have a personality Meta is suggesting that it be treated more like another human being than a tool. 

“Internet users tend to trust what these AIs say,” which makes them dangerous, said van de Poel. This is not just a theoretical risk: a man in Belgium took his own life in March 2023 after discussing the consequences of global warming with a conversational agent for six weeks.

Above all, if the boundary between the world of AIs and humans is eventually blurred completely, “this could potentially destroy trust in everything we find online because we won’t know who wrote what”, said Strasser. This would, as Dennett warned in his essay, open the door to “destroying our civilisation. Democracy depends on the informed (not misinformed) consent of the governed [which cannot be obtained if we no longer know what and whom to trust]”.

It remains to be seen if chatting with an AI lookalike of Hilton means that we are on the path to destroying the world as we know it. 

This article has been translated from the original in French


Review: Sci-Fi Action ‘The Creator’ – This Movie Isn’t Really About A.I. | FirstShowing.net


by Alex Billington
September 28, 2023

“Whose side are you on, huh?” Let’s get right into it – time to dig into this one… For the record, I’ve been a huge fan of Gareth Edwards ever since his first feature Monsters, writing a glowing review out of Cannes 2010 after catching a small screening. Now 13 years later he’s back with another original sci-fi movie titled The Creator, a big budget studio picture that is entirely his idea. The script is credited to Gareth Edwards and Chris Weitz, but Edwards gets the sole “Story by” credit on this movie. First things first, The Creator is visually astonishing and deserves to be seen on the big screen for the visuals alone. However, the rest of the movie feels rather empty, without much of a story besides another Lone Wolf and Cub rehash built into a bigger A.I. vs humans / America vs Asia world. Despite the movie being set in the future where humanoid robots are as common as regular fleshy human beings, it needs to be stated clearly – this movie isn’t actually about Artificial Intelligence at all. It’s really another Vietnam War tale turned into an action sci-fi spectacle.

In an era of reboots, sequels, adaptations, and remakes, it’s really, really nice to see a completely original sci-fi movie. And this one stands out for being so big and bold and fresh. In terms of the visuals and world-building, this movie is off the charts spectacular. In terms of the story and script, it is underwhelming. It’s a good movie but lacks depth, exploring few themes beyond just the basics. There’s not even that much to talk about after. Here’s a guy, who you don’t know much about (as usual with John David Washington in a lead role), who lost his wife, who we also don’t know much about. He just wants her back. That’s the main plot of this movie, wrapped around the “but there’s also robots & America hates them” near-future context. Edwards’ world-building is phenomenal because he builds it all around what they shot. He has explained in interviews for The Creator that they went out and shot all of it, then came back and created everything else, designing the world to make it feel big and exciting. Yes, that is exciting, and it’s enjoyable and satisfying to watch, but by the time it was over I felt empty. Even trying to discuss it, what is there to get into? Not much.

Once the movie gets going, there’s a reveal part of the way in that it’s essentially America against Southeast Asia. They don’t like this generic “New Asia”, as it is known in the future, because they’re friendly with the A.I. and have learned to integrate and live with them. Americans don’t like this A.I. because, well, something happened and a nuke exploded in Los Angeles and they blame the A.I. All this context is based not only on America’s response to 9/11, but also America’s involvement in the Vietnam War. That is really what the film is about – American imperialism and military might. They even have this gigantic aerial battle-station called “NOMAD“, which is also an obvious reference to the very real “NORAD” located in a mountain in Colorado Springs. Except this one can fly anywhere around the planet and destroy anything. Somehow the Americans have unlimited, unchallenged authority to go wherever and attack anyone with this ship. This is also an interesting reference to what happened during the building of the atomic bomb – and why some scientists leaked info to the Russians, because they didn’t believe America should have complete, monopolistic control over the ultimate weapon. In this movie, they do. However, the how & why of this thing is left unexplained.

Going in to watch The Creator, I was thinking wow it’s impressive how Gareth Edwards was able to capture the zeitgeist of 2023 with its eerily relevant story about Artificial Intelligence. As everyone knows, A.I. has taken over the tech world in 2023, including in Hollywood – the WGA and SAG-AFTRA strikes are partially about A.I. and how they will use it. However, watching this movie I realized – it doesn’t actually tap into the zeitgeist at all. Robots and A.I. have been a major part of sci-fi storytelling for decades. All of the conversations around The Creator involve everyone projecting 2023 thoughts about A.I. onto a movie that was conceived of years ago and filmed in early 2022. I’ve heard critics wondering if the movie is supposed to make us wonder if we should bow down to our benevolent A.I. overlords, instead of being against them (as is happening in the real world in 2023), because it depicts them as being so kind. One of them even says at one point that they would never harm humans, it’s not in their coding to do so (a reference to the first law of Isaac Asimov’s “Three Laws of Robotics“). But this thinking misses the point of the entire movie. These robots aren’t really the same as A.I. in 2023, they’re actually foreigners: Vietnamese, Thai, and other Asians.

The Creator Review

The movie’s actual commentary and concept is about America’s perceived ultimate superiority and desire to eradicate anything it wants or deems a “threat”, including going over to countries in Southeast Asia, like Vietnam, and killing Vietnamese people (estimates are that upwards of 3 million Vietnamese were killed in the Vietnam War). In The Creator, the “threat” is A.I., but only because the A.I. are supposedly responsible for the nuclear explosion in Los Angeles. Instead of the movie being about Artificial Intelligence, it’s using A.I. robots as the main metaphor for striking commentary on America’s militaristic ego and imperialism. It just so happens that A.I. became a major topic in 2023 and thus the movie found the right time to debut in theaters. Aside from the world-building, there is no actual dialogue or conversations or commentary in the movie about A.I. and what it means and how it works. The simple question of, can we co-exist with A.I., is a remarkably common question in most sci-fi; it’s something that almost every sci-fi writer over the last 60 years has pondered and considered in their work. This movie adds absolutely nothing to that conversation.

Ultimately, The Creator is just another sci-fi action spectacle with lots of guns and explosions that could use a much better script. Many of the quieter scenes where Joshua is lying low waiting for the next attack could have become scenes where they discuss Artificial Intelligence, technology, and the ‘bots that are everywhere. There’s not much backstory or explanation as to how the robots came to power, how they got blamed for the nuclear explosion in Los Angeles, how they got so advanced, how New Asia successfully integrated them into their society, how they learned how to be human-like, who invented them, who profits off of building them, etc. Movies like The Matrix and Blade Runner explore these topics right in the plot, but The Creator does not – it wants to be more of a cinematic experience based on sci-fi aesthetics than anything else. As for the script, it’s entirely about the connection between Joshua and the young robot he finds and names Alphie (played by Madeleine Yuna Voyles). The whole time, I could not stop thinking – this guy is a really bad dad. He never learns how to be a better caretaker of this kid, like Mando does in “The Mandalorian” series.

One of the movie’s highlights is Allison Janney starring as Colonel Howell, who is Joshua’s commanding officer from the government watching over the mission. Much like Stephen Lang’s iconic performance as the mean bastard Quaritch in Avatar, she’s another grizzled badass villain character in a big sci-fi movie that many will remember. While at first it may seem like unconventional casting, she handles the role with the right amount of grit and calm to stand out among the rest of the cast. There’s a scene in the first half where she connects with Joshua as they’re flying to Asia, as if she gets him and understands him. This bit of well-played empathy worked because it even got me; at first I thought she might end up being one of the “good guys.” Not much later we find out, oh right, she’s just another mean military tool whose jackboot mentality continues to threaten lives no matter where they go or what Joshua does to put an end to this dangerous pursuit. Madeleine’s performance as Alphie is also endearing, though at times she’s a bit hokey and phony.

If I’m honest, I do hope this movie ends up being a big hit anyway, because it will mean good things for sci-fi and original storytelling. Hollywood needs to know that a completely original creation like this is worth making and worth investing in, and that moviegoers will connect with it. That said, it’s far from the sci-fi masterpiece it could be, and I still must emphasize how simplistic and empty it is thematically. I wish I was walking out of this movie engaging in deep philosophical discussions about Artificial Intelligence, but I’m not. Because it’s not about A.I., it’s about how America will exterminate anything it deems a threat to its way of life, without any desire to understand anything beyond “they’re bad and we need to get rid of them.” That’s a lesson the movie delivers well about humanity, but it doesn’t map onto real-world tech. Earlier in 2023, one of the creators of modern A.I. left Google so he could “sound the alarm about A.I.” and its danger. That is the opposite of the message in The Creator; these are different conversations about different ideas, and we shouldn’t conflate the two. Enjoy this movie for how beautiful it looks on screen, but not for anything else.

Alex’s Rating: 7.5 out of 10
Follow Alex on Twitter – @firstshowing / Or Letterboxd – @firstshowing





The AI wave: How Tamil cinema is embracing artificial intelligence tools

Senthil Nayagam has been besotted with actor Suriya for weeks. He has downloaded interviews and speeches of the actor, and has been feeding them to his many AI (artificial intelligence) tools in an attempt to “master Suriya’s voice”.

“Let’s try running it now,” says Senthil, rubbing his hands in glee as he quickly taps his laptop. He selects one of his favourite Ilaiyaraaja songs – ‘Nilave Vaa’, sung by SP Balasubrahmanyam, in the 1986 Tamil film Mouna Raagam – and presses some more keys.

Within a few seconds, the familiar strains of ‘Nilave Vaa’ play out, sung in Suriya’s distinct voice!

An AI-generated image of what could be the ultimate blockbuster: a film starring Rajinikanth and Kamal Haasan (Photo Credit: Special Arrangement)

All this stemmed from a question Senthil asked himself a few months ago: Can I replace one person with another? Following that train of thought, he made the late SP Balasubrahmanyam sing the ‘Rathamaarey’ song, originally sung by Vishal Mishra, from Rajinikanth’s recent hit Jailer.

Senthil then went one step further; he used a face-swapping technique to replace Tamannaah with Simran in the foot-tapping ‘Kaavala’, Anirudh’s hit song from Jailer. The short video generated more than 2 million views, especially after the two actresses shared it.

For Senthil, who currently runs a generative AI company called Muonium Inc, AI is “a toy he is experimenting with”. Currently, he is making use of the technology to create a voice similar to AR Rahman’s to sing all the songs that he has composed, like ‘Usurey Poguthey’ from Raavanan, which was performed by singer Karthik. “This is actual work,” he admits. “We need to separate the instruments and the voice, clean any noise and then mix it back. I’ve got mixed feedback for my content, because some fans aren’t happy with the videos featuring people who have passed on. But audiences should understand that the possibilities with AI are exciting.”
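The separate-then-remix workflow Senthil describes (split a song into stems, swap the voice, mix back and normalise) can be illustrated with simple array operations on synthetic signals. This is only a toy sketch: a real pipeline would use a source-separation model such as Demucs to produce the stems and a voice-conversion model to produce the new vocal.

```python
import numpy as np

rate = 22050                                        # samples per second
t = np.linspace(0, 1, rate, endpoint=False)

# Stand-ins for the stems a separation model would recover.
instruments = 0.5 * np.sin(2 * np.pi * 220 * t)     # accompaniment
original_vocal = 0.3 * np.sin(2 * np.pi * 440 * t)  # the released voice track
cloned_vocal = 0.3 * np.sin(2 * np.pi * 330 * t)    # AI-generated replacement

mix = instruments + original_vocal                  # the song as released

# 1. "Separate": recover the accompaniment (exact here; approximate in practice).
accompaniment = mix - original_vocal

# 2. "Mix back": combine the accompaniment with the new voice,
#    then peak-normalise so the result stays within [-1, 1].
remix = accompaniment + cloned_vocal
remix = remix / np.max(np.abs(remix))
```

With real recordings the separation step is lossy, which is why Senthil mentions cleaning noise before remixing; the synthetic stems above make the arithmetic exact.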

AI is slowly creeping into various facets of Tamil cinema, changing the way filmmakers envision and execute their projects. Like Senthil, many others are dabbling in AI. Teejay-Sajanth Sritharan, for instance, a Sri Lankan Tamil living in the UK, has also created voice models for many leading Tamil actors.

Using AI tools like Midjourney, Stable Diffusion and ChatGPT, or a combination of Python and GPUs, these AI creators are not just putting their work out there for audiences, but also collaborating with producers and directors.

Lyricist-dialogue writer Madhan Karky used multiple AI tools for concept designs and world-building on his upcoming Suriya-starrer Kanguva. “I also use AI as my writing assistant when I write stories or scenes. It saves a lot of time, because we have tools like Kaiber and Gen-2 that can create animation and lyric videos,” says the lyricist.

Suriya in a still from ‘Kanguva’ (Photo Credit: Special Arrangement)

Using a tool called SongR, Madhan created ‘En Mele’, the world’s first AI-composed Tamil song, now available on leading music streaming platforms. “I had to make it learn Tamil, which was very challenging,” he says, adding “In a few years, most AI tools will become well-versed in all languages in the world.”

Karky has another prediction: that, within a year, a movie completely generated using AI will release in theatres.

While that may take a while, in November, audiences will be able to watch about four minutes of AI-generated content in the Tamil film Weapon, starring Sathyaraj. The makers decided to opt for this during post-production, when they felt that a flashback portion would add value to the film. Using software developed in-house, director Guhan Senniappan and his team fed photos of the leads (Sathyaraj and Vasanth Ravi) into it to generate the sequences.

“It saves a lot of time,” says Guhan, who has previously worked on Sawaari (2016) and Vella Raja (2018). “Earlier, we would need a few days to create one frame, but with AI, we can experiment with four-five frames in a single day and get instant output. When you are working with strict deadlines, this is a boon. But you need to input strong, accurate keywords for AI tools to generate the visuals you have in mind.”

Sathyaraj and director Guhan on the sets of Tamil film ‘Weapon’, which will feature an AI-generated portion (Photo Credit: Special Arrangement)

Expect AI to alter every stage of film making, including costumes. Mohamed Akram A of OrDrobe Apparels says that the Indian film industry will soon embrace AI to transform fashion in their projects and promotional activities. “Algorithms can be used to generate costume ideas that are not only visually stunning but also relevant to the storyline and character development,” he says. “Each character’s attire can be uniquely designed to reflect their personality, era, and storyline, enhancing the overall cinematic experience.”

OrDrobe, which is debuting in film merchandising through its association with the makers of the upcoming Tamil film Nanban Oruvan Vandha Piragu, is keen to participate actively in the film space. “AI can also be used in fashion trend analysis and maintaining a digital wardrobe for characters, making it easier to recreate costumes for reshoots and to ensure character continuity throughout a film,” adds Mohamed.

The cost and time saved thanks to these processes might benefit producers in the long run.

While cash-rich producers can still opt for a big VFX team to do the job, that might not be a viable option for all, especially medium and small-scale film units. Using AI tools could help them achieve 80% of the desired output at one-third the cost spent on VFX, experts say.

But it does trigger a debate on ethics: English actor-comedian Stephen Fry recently lashed out at the makers of a historical documentary for faking his voice using AI.

Does this all mean that AI will someday substitute human intelligence and labour, even in the movies? No, feels Karky. “Human creativity does have an upper edge. Our experiences and the emotions we undergo are what makes us different.”

“AI empowers creators, if used properly. But human creativity does have an upper edge. Our experiences and the emotions we undergo make us different.” (Madhan Karky, lyricist and dialogue writer)

“Using AI tools, you can achieve 80 percent of the same output at one-third the cost that you would spend on VFX.” (Senthil Nayagam, AI creator)


AI is policing the package theft beat for UPS as ‘porch piracy’ surge continues across U.S.

A doorbell camera in Chesterfield, Virginia, recently caught a man snatching a box containing a $1,600 new iPad from the arms of a FedEx delivery driver. Barely a day goes by without a similar report. Package theft, often referred to as “porch piracy,” is a big crime business.

While the price tag of any single stolen package isn’t extreme — a study by Security.org found that the median value of stolen merchandise was $50 in 2022 — the absolute level of package theft is high and rising. In 2022, 260 million delivered packages were stolen, according to home security consultant SafeWise, up from 210 million packages the year before. All in all, it estimated that 79% of Americans were victims of porch pirates last year.

In response, some of the big logistics companies have introduced technologies and programs designed to stop the crime wave. One of the most recent examples, set to go into wider deployment soon, came in June from UPS with its API for DeliveryDefense, an AI-powered approach to reducing the risk of delivery theft. The UPS tech uses historical data and machine learning algorithms to assign each location a “delivery confidence score” on a scale of one to 1,000.

“If we have a score of 1,000 to an address that means that we’re highly confident that that package is going to get delivered,” said Mark Robinson, president of UPS Capital. “At the other end of the scale, like 100 … would be one of those addresses where it would be most likely to happen, some sort of loss at the delivery point,” Robinson said.

Powered by artificial intelligence, UPS Capital’s DeliveryDefense analyzes address characteristics and generates a ‘Delivery Confidence Score’ for each address. If an address produces a low score, the merchant can then recommend in-store collection or a UPS pick-up point to the recipient.
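UPS has not published how the score is computed, but the general idea of mapping an address’s delivery history onto a one-to-1,000 scale can be sketched with a toy scoring function. Everything here (the function name, the smoothing prior, the specific numbers) is illustrative, not UPS’s actual model:

```python
def delivery_confidence_score(delivered: int, lost: int,
                              prior_ok: float = 9.0, prior_lost: float = 1.0) -> int:
    """Toy 1-1000 confidence score for a delivery address.

    Uses a smoothed historical success rate: an address with no history
    starts near 900 rather than at either extreme, and each successful
    delivery (or loss) nudges the score up (or down).
    """
    rate = (delivered + prior_ok) / (delivered + lost + prior_ok + prior_lost)
    return max(1, min(1000, round(rate * 1000)))

# A brand-new address sits high on the scale, a long clean history
# scores close to 1,000, and repeated losses drag the score down.
print(delivery_confidence_score(0, 0))     # 900
print(delivery_confidence_score(100, 0))   # 991
print(delivery_confidence_score(2, 8))     # 550
```

A real model would fold in many more address characteristics (building type, secure mailroom, neighbourhood history), but the shape is the same: historical outcomes in, bounded score out.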

The initial version was designed to integrate with the existing software of major retailers through the API; a beta test has been run with Costco Wholesale in Colorado. The company declined to provide information related to the Costco collaboration, and Costco did not return a request for comment.

DeliveryDefense, said Robinson, is “a decent way for merchants to help make better decisions about how to ship packages to their recipients.”

To meet the needs of more merchants, a web-based version is being launched for small- and medium-sized businesses on Oct. 18, just in time for peak holiday shipping season.

UPS says the decision about delivery options will ultimately rest with the individual merchant, who will decide whether and how to address any delivery risk, for example by insuring the shipment or shipping to a store location for pickup.

UPS already offers its Access Points program, which lets consumers have packages shipped to Michaels and CVS locations to ensure safe deliveries.

How Amazon, FedEx, DHL attempt to prevent theft

UPS isn’t alone in fighting porch piracy.

Among logistics competitors, DHL relies on one of the oldest methods of all — a “signature first” approach to deliveries in which delivery personnel are required to knock on the recipient’s door or ring the doorbell to obtain a signature to deliver a package. DHL customers can opt to have shipments left at their door without a signature, and in such cases, the deliverer takes a photo of the shipment to provide proof of delivery. A FedEx rep said that the company offers its own picture proof of delivery and FedEx Delivery Manager, which lets customers customize their delivery preferences, manage delivery times and locations, redirect packages to a retail location and place holds on packages.

Amazon has several features to help ensure that packages arrive safely, such as its two- to four-hour estimated delivery window “to help customers plan their day,” said an Amazon spokesperson. Amazon also offers photo-on-delivery, which provides visual delivery confirmation, and Key In-Garage Delivery, which lets eligible Amazon Prime members receive deliveries in their garage.

Debate over doorbell cameras

Amazon has also been known for its attempts to use new technology to help prevent piracy, including its Ring doorbell cameras; Amazon acquired the gadget maker in 2018 for a reported $1 billion.

Camera images can be important when filing police reports, according to Courtney Klosterman, director of communications for insurer Hippo. But the technology has done little to slow porch piracy, according to some experts who have studied its usage.

“I don’t personally think it really prevents a lot of porch piracy,” said Ben Stickle, a professor at Middle Tennessee State University and an expert on package theft.

Recent consumer experiences, including the iPad theft in Virginia, suggest criminals may not fear the camera. Last month, Julie Litvin, a pregnant woman in Central Islip, N.Y., watched thieves make off with more than 10 packages, so she installed a doorbell camera. She soon captured footage of a woman stealing a package from her doorway. She filed a police report, but said her building’s management company didn’t seem interested in providing much help.

Stickle cited a study he conducted in 2018 that showed that only about 5% of thieves made an effort to hide their identity from the cameras. “A lot of thieves, when they walked up and saw the camera, would simply look at it, take the package and walk away anyway,” he said. 

SafeWise data shows that six in 10 people said they’d had packages stolen in 2022. Rebecca Edwards, security expert for SafeWise, said this reality reinforces the view that cameras don’t stop theft. “I don’t think that cameras in general are a deterrent anymore,” Edwards said.

The best delivery crime prevention methods

The increase in packages being delivered has made them more enticing to thieves. “I think it’s been on the rise since the pandemic, because we all got a lot more packages,” Edwards said. “It’s a crime of opportunity; the opportunity has become so much bigger.”

Edwards said that the two most-effective measures consumers can take to thwart theft are requiring a signature to leave a package and dropping the package in a secure location, like a locker.

Large lockboxes start at around $70, and the most sophisticated models can run into the thousands of dollars.

Stickle recommends a lockbox to protect your packages. “Sometimes people will call and say, ‘Well, could someone break into the box?’ Well, yeah, potentially,” Stickle said. “But if they don’t see the item, they’re probably not going to walk up to your house to try and steal it.”

There is always the option of leaning on your neighbors to watch your doorstep and occasionally sign for items. Even some local police departments are willing to hold packages.

The UPS AI comes at a time of concern about the rapid deployment of artificial intelligence and potential bias in algorithms.

UPS says that DeliveryDefense relies on a dataset derived from two years’ worth of domestic UPS data, encompassing an extensive sample of billions of delivery data points. Data fairness, a UPS spokeswoman said, was built into the model, with a focus “exclusively on delivery characteristics,” rather than on any individual data. For example, in a given area, one apartment complex may have a secure mailroom with a lockbox and chain of custody, while a neighboring complex lacks such safeguards, making it more prone to package loss.

But the UPS AI is not free. The API starts at $3,000 per month. For the broader universe of small businesses being offered the web version in October, a monthly subscription starts at $99, with a variety of other pricing options for larger customers.
