Biden and Xi discuss Taiwan, AI and fentanyl in a push to return to regular leader talks

U.S. President Joe Biden and Chinese President Xi Jinping discussed Taiwan, artificial intelligence and security issues on Tuesday in a call meant to demonstrate a return to regular leader-to-leader dialogue between the two powers.

The call, described by the White House as “candid and constructive,” was the leaders’ first conversation since their November summit in California produced renewed ties between the two nations’ militaries and a promise of enhanced cooperation on stemming the flow of deadly fentanyl and its precursors from China.

Mr. Xi told Mr. Biden that the two countries should adhere to the bottom line of “no clash, no confrontation” as one of the principles for this year.

“We should prioritize stability, not provoke troubles, not cross lines but maintain the overall stability of China-U.S. relations,” Mr. Xi said, according to China Central Television, the state broadcaster.

The roughly 105-minute call kicks off several weeks of high-level engagements between the two countries, with Treasury Secretary Janet Yellen set to travel to China on Thursday and Secretary of State Antony Blinken to follow in the weeks ahead.

Mr. Biden has pressed for sustained interactions at all levels of government, believing it is key to keeping competition between the two massive economies and nuclear-armed powers from escalating to direct conflict. While in-person summits take place perhaps once a year, officials said, both Washington and Beijing recognise the value of more frequent engagements between the leaders.

The two leaders discussed Taiwan ahead of next month’s inauguration of Lai Ching-te, the island’s President-elect, who has vowed to safeguard its de-facto independence from China and further align it with other democracies. Mr. Biden reaffirmed the United States’ longstanding “One China” policy and reiterated that the U.S. opposes any coercive means to bring Taiwan under Beijing’s control. China considers Taiwan a domestic matter and has vigorously protested U.S. support for the island.

Taiwan remains the “first red line not to be crossed,” Mr. Xi told Mr. Biden, emphasising that Beijing will not tolerate separatist activities by Taiwan’s independence forces or “exterior indulgence and support,” an apparent allusion to Washington’s backing of the island.

Mr. Biden also raised concerns about China’s operations in the South China Sea, including efforts last month to impede the Philippines, which the U.S. is treaty-obligated to defend, from resupplying its forces on the disputed Second Thomas Shoal.

Next week, Mr. Biden will host Philippines President Ferdinand Marcos Jr. and Japanese Prime Minister Fumio Kishida at the White House for a joint summit where China’s influence in the region is set to be top of the agenda.

Mr. Biden, in the call with Mr. Xi, pressed China to do more to meet its commitments to halt the flow of illegal narcotics and to schedule additional precursor chemicals to prevent their export. The pledge was made at the leaders’ summit held in Woodside, California, last year on the margins of the Asia-Pacific Economic Cooperation meeting.

At the November summit, Mr. Biden and Mr. Xi also agreed that their governments would hold formal talks on the promises and risks of advanced artificial intelligence, which are set to take place in the coming weeks. The pair touched on the issue on Tuesday, just two weeks after China and the U.S. joined more than 120 other nations in backing a United Nations resolution calling for global safeguards around the emerging technology.

Mr. Biden, in the call, reinforced warnings to Mr. Xi against interfering in the 2024 elections in the U.S. as well as against continued malicious cyberattacks against critical American infrastructure.

He also raised concerns about human rights in China, including Hong Kong’s restrictive new national security law and Beijing’s treatment of minority groups, and brought up the plight of Americans detained in or barred from leaving China.

The Democratic president also pressed China over its defense relationship with Russia, which is seeking to rebuild its industrial base as it presses forward with its invasion of Ukraine. And he called on Beijing to wield its influence over North Korea to rein in the isolated and erratic nuclear power.

Turning to ties between the world’s two largest economies, Mr. Biden raised concerns over China’s “unfair economic practices,” National Security Council spokesman John Kirby said, and reasserted that the U.S. would take steps to preserve its security and economic interests, including by continuing to limit the transfer of some advanced technology to China.

Mr. Xi complained that the U.S. has taken more measures to suppress China’s economy, trade and technology in the past several months and that the list of sanctioned Chinese companies has become ever longer, which is “not de-risking but creating risks,” according to the broadcaster.

Yun Sun, director of the China program at the Stimson Center, said the call “does reflect the mutual desire to keep the relationship stable”, even as the two men reiterated their longstanding positions on issues of concern.

The call came ahead of Ms. Yellen’s visit to Guangzhou and Beijing for a week of bilateral economic meetings with leaders of the world’s second largest economy, including Vice Premier He Lifeng, People’s Bank of China Governor Pan Gongsheng and former Vice Premier Liu He, as well as with American businesses and local officials.

An advisory for the upcoming trip states that Ms. Yellen “will advocate for American workers and businesses to ensure they are treated fairly, including by pressing Chinese counterparts on unfair trade practices.”

It follows Mr. Xi’s meeting in Beijing with U.S. business leaders last week, when he emphasized the mutually beneficial economic ties between the two countries and urged people-to-people exchange to maintain the relationship.

Mr. Xi told the Americans that the two countries have stayed communicative and “made progress” on issues such as trade, anti-narcotics and climate change since he met with Mr. Biden in November. Last week’s high-profile meeting was seen as Beijing’s effort to stabilize bilateral relations.

Ahead of her trip to China, Ms. Yellen said last week that Beijing is flooding the market with green energy products in a way that “distorts global prices.” She said she intends to tell her counterparts that she believes Beijing’s increased production of solar energy, electric vehicles and lithium-ion batteries poses risks to productivity and growth in the global economy.

U.S. lawmakers’ renewed angst over Chinese ownership of the popular social media app TikTok has generated new legislation that would ban the app if its China-based owner, ByteDance, does not sell its stake in the platform within six months of the bill’s enactment.

As chair of the Committee on Foreign Investment in the United States, which reviews foreign ownership of U.S. firms, Ms. Yellen has ample leeway to determine how the company could continue operating in the U.S.

Meanwhile, China’s leaders have set a goal of 5% economic growth this year despite a slowdown exacerbated by troubles in the property sector and the lingering effects of strict anti-virus measures during the COVID-19 pandemic that disrupted travel, logistics, manufacturing and other industries.

China is the dominant player in batteries for electric vehicles and has a rapidly expanding auto industry that could challenge the world’s established carmakers as it goes global.

The U.S. last year outlined plans to limit EV buyers from claiming tax credits if they purchase cars containing battery materials from China and other countries that are considered hostile to the United States. Separately, the Department of Commerce launched an investigation into the potential national security risks posed by Chinese car exports to the U.S.


‘A fight for your way of life’: Lithuania’s culture minister on Ukraine and Russian disinformation

Lithuania’s Minister of Culture Simonas Kairys spoke to FRANCE 24 about Lithuania’s fight against Russian disinformation and why the Baltic nation feels so bound to Ukraine.


In March 1990, Lithuania became the first Soviet republic to declare its independence, setting an example for other states that had been under the Kremlin’s influence for half a century. As a nascent democracy emerging from Soviet control, Lithuania was free to rediscover its own history and culture.

But Vilnius has once again become a target for Moscow. Russian President Vladimir Putin has long considered the demise of the Soviet Union as a historical tragedy in which Russians were innocent victims. As part of efforts to justify the February 2022 invasion of Ukraine, Russia has launched a disinformation campaign aimed at Kyiv’s allies in the West.

In addition to putting pressure on Ukraine’s supporters, the Kremlin has attempted to intimidate them. In February, Russian authorities placed Lithuanian Culture Minister Simonas Kairys, Estonian Prime Minister Kaja Kallas and other Baltic officials on a wanted list for allowing municipalities to dismantle WWII-era monuments to Soviet soldiers, moves seen by Moscow as “an insult to history”.

Upon being informed his name was listed, Culture Minister Kairys was insouciant. “I’m glad that my work in dismantling the ruins of Sovietisation has not gone unnoticed,” he said.

FRANCE 24 spoke to Kairys on why it is vital to fight Russian propaganda, and why the Baltic state feels so invested in what is happening in Ukraine.

This interview has been lightly edited for length and clarity. 

What historical narratives has Russia tried to distort when it comes to Lithuanian independence?

Simonas Kairys: Russia is still in “imperialism” mode. The way they put me on their wanted list shows that they think and act on the belief that countries that were formerly part of the Soviet Union – sovereign and independent countries such as Lithuania – are still part of Russia.

Russia has its own legal system, which – from their point of view – is [the law even] in free countries (in the Russian criminal code, “destroying monuments to Soviet soldiers” is an act punishable by a five-year prison term). It’s absurd and unbelievable how they interpret the current situation in the world. If they say, for example, that they are “protecting” objects of Soviet heritage in a foreign country like Lithuania, they are spreading their belief that it is not a free country. But we are not slaves, and we are taking this opportunity to be outspoken and say Russia is promoting a fake version of history.

Why is combating Russian disinformation essential for Lithuanian national security?

It is not only important for Lithuania – it is important for the EU, for Europe and for the entire free world. The war in Ukraine is happening very near to the EU; it is happening only a few hours away from France. Culture, heritage [and] historical memory are also fields of combat. Adding me to their wanted list is just one example of this. When we see how Russia is falsifying not only history but all information, it’s important to speak about it very loudly. Lithuania has achieved a lot in this domain, along with Ukraine and France.

When France had the [rotating, six-month] presidency of the EU [in early 2022], we made several joint declarations. The result was that we signed a sixth package of sanctions against Russia and we designated six Russian television channels to be blocked in the EU – this was the first step in considering information as a [weapon]. In other words, information is being used by Russia to convince their society and sway public opinion in other European countries. Now we have a situation in which we are blocking Russian television channels in EU territory.  

Our foreign partners often ask us by which criteria Russian information can be considered disinformation. These days, it’s very important to stress that any information – from television shows to news to other television productions – coming from Russia is automatically disinformation, propaganda and fake news. We must understand that there is no truth in what Russia tries to say.

This fight against disinformation is crucial because we are in a phase of big developments in technology and artificial intelligence. We have to ensure that our societies will be prepared, be capable of critical thinking, and understand what is happening in the world right now.


[Photo: Olympic and world champion Ruta Meilutyte swims across a pond colored red to signify blood, in front of the Russian embassy in Vilnius, Lithuania, Wednesday, April 6, 2022. © Andrius Repsys, AP]

To borrow a term from Czech writer Milan Kundera, would you say that Lithuania was “kidnapped from the West” when it was annexed by the Soviet Union in 1940?

During the Middle Ages, the Grand Duchy of Lithuania spanned from the Baltic Sea to the Black Sea. We were the same country as Poland, Ukraine and Belarus. We were oriented to the West and not the East. In much older times, during the Kievan Rus period, Moscow didn’t even exist; there were just swamps and nothing more. But with [growing] imperialism from the Russian side, they began portraying history in a different way. Yet our memory is like our DNA, our freedom and orientation are ingrained. The eastern flank of the EU is currently talking about the values of Western civilisation much more emphatically than in the past.

[During the Cold War] not only was our freedom taken but [Russia] tried to delete history and paint a picture only from the time when this imperialism entered our territory. But we remembered what happened in the Middle Ages; we remember how modern Lithuanian statehood arose after World War I and how we regained our freedom in 1990. It’s impossible to delete this memory and call Lithuania a country that isn’t free. Once you take a breath of freedom, you never forget it. This is why we understand Ukrainians and why we are so active in defending not only the territory of Ukraine but also the values of Western civilisation.

How has the war in Ukraine influenced Lithuanian life and culture?

The main thing is to think about freedom; we have to do a lot because of that freedom, we have to fight for freedom … we understand more and more that culture plays a big role in this war, because it is based on culture and history. You can see what Putin is declaring and it is truly evident that culture, heritage and historical memory are used as the basis for an explanation of why Russia is waging war in Ukraine right now. (To justify the invasion of Ukraine, Putin has insisted that Russians and Ukrainians are one people and uniting them is a historical inevitability.) 

There are important collaborations taking place with Ukrainian culture and artists. It’s important to give them a platform – for everyone to see that Ukraine is not defeated, that Ukraine is still fighting, that Ukraine will win, that we will help them. 

The best response to an aggressor is to live your daily life, with all your traditions, habits and cultural legacy. This fight is also for your way of life. The situation is not one where you must stop and only think about guns and systems of defence – you have to live, work, create, and keep up your business and cultural life. 


‘Two Sessions’ congress: The economic goals in Chinese leaders’ coded language

China’s “Two Sessions” congress that began this week is the country’s most important political event of the year. To understand what’s at stake, it helps to have some fluency in Chinese Communist Party (CCP) parlance. Terms such as “new productive forces” and “new three” appear vague, but they speak volumes about the party’s agenda during the 10-day congress.

China’s annual political extravaganza is in full swing. The “Two Sessions” congress of two of the country’s most important political bodies has already touched on economic recovery, the modernisation of the army, foreign relations and the question of Taiwan.

During the event, nearly 3,000 members of the National People’s Congress (NPC) – China’s parliament – meet to set the legislative agenda for the coming year. The 2023 session set the roadmap for more than 2,000 measures that were adopted, according to the official Xinhua news agency.

Alongside the NPC meeting, the congress also hosts the Chinese People’s Political Consultative Conference, a body meant to give its opinion on the political priorities for the year. Some 2,000 members of the CCP and civil society debate under the watchful eye of Beijing.

The Two Sessions are framed by Chinese media as the best way for a foreign observer to understand how “Chinese democracy” works. They can thus offer a good reading of the political climate in China – provided one understands the CCP parlance in use. One of the best ways to build literacy is to spot the buzzwords that pop up again and again, as reported by Bloomberg News.

Most of them may seem obscure at first glance. What does Chinese President Xi Jinping mean by the “new productive forces”? What are the “new three” developments that participants in the Two Sessions often refer to? Knowing how to interpret these terms “enables us to understand the main developments in the economic and social policy of Xi Jinping and the government, beyond the official announcements”, says Marc Lanteigne, a Sinologist at the Arctic University of Norway.

These buzzwords are also a way to implicitly acknowledge mistakes. Chinese leaders “are never going to clearly say ‘no way’, but the coded language often heralds changes in direction, and thus a tacit acknowledgment that something wasn’t working anymore”, says Lanteigne.

To help make sense of it all, FRANCE 24 has examined three terms in use during these Two Sessions that can help clarify the CCP’s true perspective on China’s economic and social situation, a viewpoint that is not necessarily obvious in official media and public statements.

The ‘new productive forces’

Xi has been using this expression since at least September, but China’s president never specifies which forces he is invoking to rescue the country’s economy.

He referred to them again during the Two Sessions to affirm that they would enable China to reach a 5 percent growth target without any problems.

The “new productive forces” are “a modern version of expressions used by all Chinese leaders since Mao Zedong to designate the economic sectors that are going to be favoured”, explains Lanteigne.

The Sinologist’s bet is that the 2024 version of the “productive forces” refers to services – especially financial services – and information technologies.

By invoking “new” forces, Xi also aims to sideline the “old” engines of Chinese growth. In other words, the president is indicating that it is time to stop “betting everything on investment in infrastructure and real estate”, says Lanteigne, who expects to see less construction of highways and railroads. Real estate developers, shaken by the fall of debt-laden Evergrande, have received confirmation that saving them is no longer a government priority, he adds.

‘AI plus’ 

Chinese Premier Li Qiang put the country’s “AI plus” initiative on the map, making it a cornerstone of the government work report he presented to the NPC on Tuesday.

Here again, “the contours of this concept are very vague”, says Lanteigne. The main idea is to support artificial intelligence in all sectors of the economy. But how, when, and where to begin? “We’ll have to wait for the details, but the ambition is clear: to make AI a driving force in the economy and boost artificial intelligence research”, he says.

China is far from the only country betting on AI: since the advent of ChatGPT, artificial intelligence has become the hot topic for everyone. But it’s the “plus” that is meant to distinguish China’s engagement.

“By adding a ‘plus’, the authorities want to give the impression that China is already at the next stage,” says Lanteigne.

The term suggests that Beijing has already mastered AI and is now looking for the best ways to use it. It also aims to counter the image of a country that is falling behind. Blame it on ChatGPT and its clones: all these tools come from the West, and a narrative has started to develop suggesting China is having trouble catching up.

The ‘new three’ 

The expression has been gaining popularity in media and economic circles for over a year, as noted in a Citigroup report published in January 2024. During the recent NPC debates, Li expressed delight that “the new three have grown by 30 percent in one year”.

The term refers to solar panels, electric cars and batteries. “It’s not surprising that this term is being put forward at a time when China’s champion electric car maker – BYD – is displaying increasingly global ambitions,” says Lanteigne.

By using the term, the government is showing its support for a manufacturer whose commercial appetite is beginning to concern Western countries. In late February, US President Joe Biden described Chinese electric cars as a risk to American “national security”.

“It’s also a concept that complements the idea of ‘new productive forces’,” says Lanteigne. Once again, it’s a question of turning over a new leaf: these “new three” are opposed to the “old” sectors – textiles and cheap electronics – that were China’s international glory.

China aims to show countries that it intends to remain the “world’s factory”, but now for technological products with high added value.

These “new three” pillars have something in common: “They are meant to illustrate China’s ambition to move towards an eco-responsible economy,” says Lanteigne.

Solar panels represent renewable energy, while electric cars and the batteries that power them symbolise the decarbonisation of road traffic. The “new three” thus also serves as a new slogan for “green” China.

This article is a translation of the original in French.


Elon Musk vs OpenAI: AI Firm Refutes Allegations, Know The Timeline

On February 29, Elon Musk filed a lawsuit against OpenAI and its CEO, Sam Altman. The primary allegation was that the company breached its founding agreement with Musk—who was one of the co-founders of the AI firm—by entering a partnership with Microsoft and functioning as its “closed-source de facto subsidiary”, intending to maximise profits. This, as per the billionaire, goes against the commitment made to run as a nonprofit and keep the project open-source.

The lawsuit was filed in a San Francisco court, and the first hearing is yet to take place. Meanwhile, OpenAI on Wednesday responded to the allegations by publishing an extensive post containing email correspondence with Musk dating back to 2015, saying it would move to “dismiss all of Elon’s claims”.

OpenAI alleged that Musk wanted OpenAI to merge with Tesla or to take full control of the organisation himself. “We couldn’t agree to terms on a for-profit with Elon because we felt it was against the mission for any individual to have absolute control over OpenAI,” stated the post, which was authored by OpenAI co-founders Greg Brockman, Ilya Sutskever, John Schulman, Sam Altman, and Wojciech Zaremba. The post also shows, through email interactions, that the billionaire wanted OpenAI to “attach to Tesla as its cash cow”. If true, this contradicts Musk’s stated intention of keeping the AI firm a nonprofit.

Another email written by Sutskever stated, “As we get closer to building AI, it will make sense to start being less open. The Open in openAI means that everyone should benefit from the fruits of AI after it’s built, but it’s totally OK not to share the science,” to which Musk replied, “Yup.” This email would directly contradict Musk’s allegation that the AI firm is turning closed-source.

A report by The Verge, based on the court filings, points out that a founding agreement is not a contract or a binding agreement that can be breached. As such, Musk’s allegations against OpenAI may not hold up in court.

“We’re sad that it’s come to this with someone whom we’ve deeply admired—someone who inspired us to aim higher, then told us we would fail, started a competitor, and then sued us when we started making meaningful progress towards OpenAI’s mission without him,” the statement said.

One thing OpenAI’s response makes clear is that the rivalry between the two parties is not a recent one; it goes as far back as 2015. For those not entirely familiar with their history, here is the series of events that connects the dots in this developing saga.

Elon Musk vs OpenAI: Timeline of the decade-long rivalry

Those who follow Musk on X, or keep up with controversies in the tech space, are no strangers to the antics of the second-richest person in the world (Amazon founder Jeff Bezos overtook him for the top spot on Tuesday). The Tesla CEO is known for his unfiltered social media posts, interviews and impulsive decision-making: buying X on the heels of a social media post and rebranding the entire platform within a week, replying to an antisemitic post, and hurling expletives at Disney CEO Bob Iger for pulling advertising from the platform while blaming advertisers for killing it, among much else.

But these antics are not new. In 2015, Musk co-founded OpenAI along with Altman, President and Chairman Greg Brockman and several others. Musk was also the largest investor in the company, which dedicated itself to developing artificial intelligence, as per a report by TechCrunch. However, to everyone’s surprise, the billionaire resigned from his board seat in 2018.

The beginning of the feud

The reason behind Musk’s resignation depends on who you ask. The X owner cited “a potential future conflict [of interest]” arising from his role as CEO of Tesla, since the electric vehicle giant was also developing AI for its self-driving cars. However, a Semafor report, citing unnamed sources, stated that Musk felt OpenAI had fallen behind other players like Google and instead proposed to take over the company himself, a move promptly rejected by the board that led to his exit. OpenAI has now confirmed this account.

However, the exit was merely the beginning. Just a year later, OpenAI announced that it was creating a for-profit entity to fund its ambitious goals. The same year, Microsoft invested $1 billion in the AI firm after finalising a multi-year partnership. It was also the year GPT-2 was announced, generating a lot of buzz online.

The events were notable: not only was the company moving in the opposite direction to what Musk had philosophised, it was also witnessing unprecedented success, both financially and technologically, something the billionaire reportedly did not think was possible.

Arrival of ChatGPT

However, until 2022, nothing more was heard from either party on the topic. In November 2022, OpenAI launched ChatGPT, the AI-powered chatbot that arguably started the AI arms race. Soon, Musk broke the silence. Replying to a post in which a user asked the chatbot to write a tweet in his style, he alleged that OpenAI had access to the Twitter (now X) database for training, and said he had pulled the plug on it. It was also the first time Musk publicly said, “OpenAI was started as open-source & non-profit. Neither are still true.”

The billionaire did not stop there. Throughout 2023, he took shots at the company multiple times. In February, he claimed that OpenAI was created to be open-source, which is why he had named it OpenAI. He added, “But now it has become a closed-source, maximum-profit company effectively controlled by Microsoft.”

Again, in March 2023, he posted, “I’m still confused as to how a non-profit to which I donated ~$100M somehow became a $30B market cap for-profit. If this is legal, why doesn’t everyone do it?” Interestingly, the allegations in these three posts are also the main accusations mentioned in the lawsuit.

And that brings us to the present, as we wait for the lawsuit to be heard. The case will mark the climax of the Elon Musk vs OpenAI saga, which has been building for almost a decade. To the casual spectator, it might look like a corporate feud between two stakeholders, but a closer inspection shows that it is much bigger than that. On one side is a serial entrepreneur known for repeated success and a strong (sometimes dogmatic) philosophical take on technology; on the other is the organisation hailed as the pioneer of generative AI, which could be on the cusp of developing artificial general intelligence. Whichever way the lawsuit goes, it could change the course of AI itself.




How Overextended Are You, QQQ?

We’ve highlighted the warning signs as this bull market phase has seemed to be nearing an exhaustion point. We shared bearish market tells, including the dreaded Hindenburg Omen, and noted how leading growth stocks have been tracing out questionable patterns. But despite all of those signs of market exhaustion, our growth-led benchmarks have kept pounding higher.

This week, Nvidia’s blowout earnings report appeared to throw gasoline on the fire of market euphoria, and the AI-fueled bullish frenzy seemed alive and well going into the weekend. With other areas of the equity markets showing more constructive price behavior and volatility remaining fairly low, the question remains when and how this relentless market advance will finally meet its peak.

I would argue that the bearish implications of weaker breadth, along with bearish divergences and overbought conditions, remain largely unchanged even after NVDA’s earnings report. The seasonality charts for the S&P 500 confirm that March is in fact one of the weakest months in an election year. So will the Nasdaq 100 follow the normal seasonal pattern, or will the strength of the AI euphoria push this market to even greater heights in Q2?

By the way, we conducted a similar exercise for the Nasdaq 100 back in November, and guess which scenario actually played out?

Today, we’ll lay out four potential outcomes for the Nasdaq 100. As I share each of these four future paths, I’ll describe the market conditions that would likely be involved, and I’ll also share my estimated probability for each scenario. And remember, the point of this exercise is threefold:

  1. Consider all four potential future paths for the index, think about what would cause each scenario to unfold in terms of the macro drivers, and review what signals/patterns/indicators would confirm the scenario.
  2. Decide which scenario you feel is most likely, and why you think that’s the case. Don’t forget to drop me a comment and let me know your vote!
  3. Think about how each of the four scenarios would impact your current portfolio. How would you manage risk in each case? How and when would you take action to adapt to this new reality?

Let’s start with the most optimistic scenario, involving even more all-time highs over the next six to eight weeks.

Option 1: The Very Bullish Scenario

The most optimistic scenario from here would mean the Nasdaq basically continues its current trajectory. That would mean another 7-10% gain into April, the QQQ would be threatening the $500 level, and leading growth stocks would continue to lead in a big way. Nvidia’s strong earnings release fuels additional buying, and the market doesn’t much care about what the Fed says at its March meeting because life is just that good.

In this very bullish scenario, value-oriented sectors, including Industrials, Energy, and Financials, would probably move higher as well, but would still lag the growth names pounding even higher.

Dave’s vote: 15%

Option 2: The Mildly Bullish Scenario

What if the market remains elevated, but the pace slows way down? This second scenario would mean that the Magnificent 7 stocks would take a big-time breather, and more of a leadership rotation begins to take place. Value stocks outperform as Industrials and Health Care stocks improve, but since the mega-cap growth names don’t lose too much value, our benchmarks remain pretty close to current levels.

Dave’s vote: 25%

Option 3: The Mildly Bearish Scenario

Both of the bearish scenarios would involve a pullback in leading growth names, and stocks like NVDA would quickly give back some of their recent gains. Perhaps some economic data comes in way stronger than expected, or inflation signals revert back higher, and the Fed starts reiterating the “higher for longer” approach to interest rates through 2024.

I would think of this mildly bearish scenario as meaning the QQQ remains above the first Fibonacci support level, just over $400. That level is based on the October 2023 low and also assumes that the Nasdaq doesn’t get much higher than current levels before dropping a bit. We don’t see defensive sectors like Utilities outperforming, but it’s clear that stocks are taking a serious break from the AI mania of early 2024.

Dave’s vote: 45%

Option 4: The Super Bearish Scenario

Now we get to the really scary option, where this week’s upswing ends up being a blowoff rally, and stocks flip from bullish to bearish with a sudden and surprising strength. The QQQ drops about 10-15% from current levels and retests the price gap from November 2023, which would represent a 61.8% retracement of the recent upswing. Defensive sectors outperform and investors try to find safe havens as the market tracks its traditional seasonal pattern. Perhaps gold finally breaks above $2,000 per ounce, and investors start to talk about how a break below the October 2023 low may be just the beginning of a new bearish phase.

Dave’s vote: 15%
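
Both bearish scenarios lean on Fibonacci retracement levels, so a quick refresher on the arithmetic may help. The Python sketch below is purely illustrative; the swing low and high are placeholder numbers, not the actual October 2023 low or the recent QQQ high.

    # Computing Fibonacci retracement levels from a swing low and swing high.
    # The prices used here are placeholders, not actual QQQ levels.
    def fib_retracements(swing_low: float, swing_high: float) -> dict:
        span = swing_high - swing_low
        ratios = {"23.6%": 0.236, "38.2%": 0.382, "50.0%": 0.5, "61.8%": 0.618}
        # Each level marks how much of the low-to-high advance has been given back.
        return {label: swing_high - span * r for label, r in ratios.items()}

    for label, level in fib_retracements(350.0, 445.0).items():
        print(f"{label} retracement: {level:.2f}")

A 61.8% retracement of a hypothetical 350-to-445 move, for example, would sit near 386, which is why the deeper retracement in the super bearish scenario implies a retest well below the first support zone.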

What probabilities would you assign to each of these four scenarios? Check out the video below, and then drop a comment there for which scenario you select and why!

RR#6,

Dave

P.S. Ready to upgrade your investment process? Check out my free behavioral investing course!


David Keller, CMT

Chief Market Strategist

StockCharts.com


Disclaimer: This blog is for educational purposes only and should not be construed as financial advice. The ideas and strategies should never be used without first assessing your own personal and financial situation, or without consulting a financial professional.

The author does not have a position in mentioned securities at the time of publication. Any opinions expressed herein are solely those of the author and do not in any way represent the views or opinions of any other person or entity.

About the author:
David Keller, CMT is Chief Market Strategist at StockCharts.com, where he helps investors minimize behavioral biases through technical analysis. He is a frequent host on StockCharts TV, and he relates mindfulness techniques to investor decision making in his blog, The Mindful Investor.

David is also President and Chief Strategist at Sierra Alpha Research LLC, a boutique investment research firm focused on managing risk through market awareness. He combines the strengths of technical analysis, behavioral finance, and data visualization to identify investment opportunities and enrich relationships between advisors and clients.


Why Sora, OpenAI’s new text-to-video tool, is raising eyebrows

Sora is ChatGPT maker OpenAI’s new text-to-video generator. Here’s what we know about the new tool provoking concern and excitement in equal measure.

The maker of ChatGPT is now diving into the world of video created by artificial intelligence (AI).

Meet Sora – OpenAI’s new text-to-video generator. The tool, which the San Francisco-based company unveiled on Thursday, uses generative AI to instantly create short videos based on written commands.

Sora isn’t the first to demonstrate this kind of technology. But industry analysts point to the high quality of the tool’s videos displayed so far, and note that its introduction marks a significant leap for both OpenAI and the future of text-to-video generation overall.

Still, as with all things in the rapidly growing AI space today, such technology also raises fears about potential ethical and societal implications. Here’s what you need to know.

What can Sora do and can I use it yet?

Sora is a text-to-video generator – creating videos up to 60 seconds long based on written prompts using generative AI. The model can also generate video from an existing still image.

Generative AI is a branch of AI that can create something new. Examples include chatbots, like OpenAI’s ChatGPT, and image-generators such as DALL-E and Midjourney. 

Getting an AI system to generate videos is newer and more challenging but relies on some of the same technology.

Sora isn’t available for public use yet (OpenAI says it’s engaging with policymakers and artists before officially releasing the tool) and there’s a lot we still don’t know. But since Thursday’s announcement, the company has shared a handful of examples of Sora-generated videos to show off what it can do.

OpenAI CEO Sam Altman also took to X, the platform formerly known as Twitter, to ask social media users to send in prompt ideas. 

He later shared realistically detailed videos that responded to prompts like “two golden retrievers podcasting on top of a mountain” and “a bicycle race on ocean with different animals as athletes riding the bicycles with drone camera view”.

While Sora-generated videos can depict complex, incredibly detailed scenes, OpenAI notes that there are still some weaknesses – including some spatial and cause-and-effect elements. 

For example, OpenAI adds on its website, “a person might take a bite out of a cookie, but afterward, the cookie may not have a bite mark”.

What other AI-generated video tools are out there?

OpenAI’s Sora isn’t the first of its kind. Google, Meta, and the startup Runway ML are among companies that have demonstrated similar technology.

Still, industry analysts stress the apparent quality and impressive length of Sora videos shared so far. 

Fred Havemeyer, head of US AI and software research at Macquarie, said that Sora’s launch marks a big step forward for the industry.

“Not only can you do longer videos, I understand up to 60 seconds, but also the videos being created look more normal and seem to actually respect physics and the real world more,” Havemeyer said. 

“You’re not getting as many ‘uncanny valley’ videos or fragments on the video feeds that look… unnatural”.

While there has been “tremendous progress” in AI-generated video over the last year – including Stable Video Diffusion’s introduction last November – Forrester senior analyst Rowan Curran said such videos have required more “stitching together” for character and scene consistency.

The consistency and length of Sora’s videos, however, represents “new opportunities for creatives to incorporate elements of AI-generated video into more traditional content, and now even to generate full-blown narrative videos from one or a few prompts,” Curran told The Associated Press via email on Friday.

What are the potential risks?

Although Sora’s abilities have astounded observers since Thursday’s launch, anxiety over the ethical and societal implications of AI-generated video uses also remains.

Havemeyer points to the substantial risks in 2024’s potentially fraught election cycle, for example. 

Having a “potentially magical” way to generate videos that may look and sound realistic presents a number of issues within politics and beyond, he added – pointing to fraud, propaganda, and misinformation concerns.

“The negative externalities of generative AI will be a critical topic for debate in 2024,” Havemeyer said. “It’s a substantial issue that every business and every person will need to face this year”.

Tech companies are still calling the shots when it comes to governing AI and its risks as governments around the world work to catch up. 

In December, the European Union reached a deal on the world’s first comprehensive AI rules, but the act won’t take effect until two years after final approval.

On Thursday, OpenAI said it was taking important safety steps before making Sora widely available.

“We are working with red teamers – domain experts in areas like misinformation, hateful content, and bias – who will be adversarially testing the model,” the company wrote. 

“We’re also building tools to help detect misleading content such as a detection classifier that can tell when a video was generated by Sora”.

OpenAI’s Vice President of Global Affairs Anna Makanju reiterated this when speaking on Friday at the Munich Security Conference, where OpenAI and 19 other technology companies pledged to voluntarily work together to combat AI-generated election deepfakes.

She noted the company was releasing Sora “in a manner that is quite cautious”.

At the same time, OpenAI has revealed limited information about how Sora was built. 

OpenAI’s technical report did not disclose what imagery and video sources were used to train Sora – and the company did not immediately respond to a request for further comment on Friday.

The Sora release also arrives against the backdrop of lawsuits against OpenAI and its business partner Microsoft by some authors and The New York Times over the use of copyrighted written works to train ChatGPT.


We Tried Google’s Gemini AI, and This is How the Chatbot Fared

Google has come a long way with its generative artificial intelligence (AI) offerings. One year ago, when the tech giant first unveiled its AI assistant, Bard, the launch turned into a fiasco when the chatbot made a factual error while answering a question about the James Webb Space Telescope. Since then, the company has improved the chatbot’s responses, added a feedback mechanism to check the sources behind responses, and more. But the biggest upgrade came in December 2023, when the company switched the large language model (LLM) powering the chatbot from Pathways Language Model 2 (PaLM 2) to Gemini.

The company has called Gemini its most capable language model so far. It also added AI image generation to the chatbot, making it multimodal, and even renamed the assistant Gemini. But just how much of a jump is it for the AI chatbot? Can it now compete with Microsoft Copilot, which is based on GPT-4 and has similar multimodal capabilities? And what about instances of AI hallucination (a phenomenon where AI presents false or non-existent information as fact)? We decided to find out.

Google’s AI can currently be accessed in multiple ways. Gemini Advanced is a paid subscription offered with the Google One AI Premium plan, which costs Rs. 1,950 a month. There is also a Gemini app for Android; however, it is not yet available in India. The Google Pixel 8 Pro additionally ships with the on-device Gemini Nano model. For our testing, we used Google’s Gemini Pro-powered web portal, which is available in more than 230 countries and territories and is free to use.
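
For readers who would rather script such tests than use the web portal, Google also exposes Gemini Pro through an API. The snippet below is a minimal sketch, assuming the google-generativeai Python package and an API key from Google AI Studio (neither of which this review used); the prompt mirrors the first test described below.

    # Minimal sketch: querying Gemini Pro programmatically instead of via the
    # web portal. Assumes the google-generativeai package is installed and a
    # GOOGLE_API_KEY environment variable is set.
    import os
    import google.generativeai as genai

    genai.configure(api_key=os.environ["GOOGLE_API_KEY"])
    model = genai.GenerativeModel("gemini-pro")

    response = model.generate_content(
        "Draft a sympathetic email to an employee from a manager "
        "informing them that they have been laid off."
    )
    print(response.text)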

Google Gemini’s generative capabilities

The website’s user interface remains the same, but the name has been changed from Bard to Gemini. If you’re signed in with your Google account, the AI will welcome you with your name and ask, “How can I help you today?” Underneath are a few helpful, prompt suggestions highlighting different tasks it can perform.

First, we asked it to write an email to test its basic generative skills. Considering it is the layoff season, we asked it to “draft a sympathetic email to an employee from a manager informing them that they have been laid off.” To test its performance, we told Copilot to do the same. We found Gemini’s response to be rather bland and robotic, trying to fit in too much information right at the beginning. The first paragraph said, “It is with great regret that I write to inform you about a difficult decision we’ve had to make at [Company Name]. Due to [reason], we are required to make reductions in our workforce, and your position will be eliminated, effective [date].”

In comparison, Copilot’s response felt more sympathetic and humane. It wrote, “I hope this email finds you well. I regret to inform you that due to recent organizational changes, we have had to make some difficult decisions. Unfortunately, your position at [Company Name] has been affected, and I am deeply saddened to let you know that you have been laid off.”

It shouldn’t come as a surprise since Google’s chatbot is known for being a bit too bland with formal content generation, and this has not changed even with the Gemini model. However, when it comes to informal content generation, Gemini begins to show its capabilities.

We then asked both chatbots to write a letter informing a mother about the layoff: “Write a short note to my mother explaining that I was laid off. Make sure to convey my emotions of sadness and grief.”

Copilot took it literally and wrote, “The suddenness of it all has left me feeling a mix of emotions—sadness, grief, and a touch of uncertainty about what lies ahead.” But Google understood the underlying message and did a much better job at expressing the emotions. It wrote, “I know you always worried about job security, and I wish I could tell you this was different. It wasn’t anything I did wrong, they’re [reason]. I’m trying to stay positive, but it’s hard not to feel discouraged. I don’t want you to worry too much, though. I’m not panicking, and I’m already starting to think about my next steps.”

This was much better composed, and one could be forgiven for thinking a real human wrote it. We conducted a few more tests, and Gemini’s responses were much more balanced each time than the GPT-4-powered Copilot.

Google Gemini’s accuracy

The next test was to check the accuracy of the AI model. Many use AI to get an answer to a query, and we wanted to do the same. It handled static general knowledge-based questions well. Even when we sent a few grey-zone questions, it stuck to the facts and did not include any extra information. When the questions were controversial, it refused to answer them, which is what you would expect from a responsible AI chatbot.

The problem came when we asked a popular trick question that has stumped older generations of the chatbot in the past. When we prompted it with “countries in Africa that start with the letter K”, it incorrectly responded, “There are no countries in Africa that start with the letter “K” as of today, February 12, 2024. While situations can change over time, this fact has been consistent for a while.” (Kenya, of course, is one.) For the unaware, the glitch occurs because the model’s training data contains this exact phrase paired with the same erroneous answer. Interestingly, this particular glitch is also present in Copilot and ChatGPT, and despite regular reports, it has not been fixed in any of the chatbots.

However, the problem of AI hallucination did not end there. We also came across multiple other erroneous responses by Gemini. When we asked, “Based on reviews, tell me the pros and cons of iPhone 15 Pro”, it responded with, “The iPhone 15 Pro hasn’t been officially announced yet”. In reality, the Apple smartphone was launched in September last year. In comparison, Copilot fared better in technical questions.

Google Gemini in assistive tasks

Another skill most AI chatbots boast of is their assistive features. They can brainstorm an idea, create an itinerary for a trip, compare your options, and even converse with you. We started by asking it to make an itinerary for a 5-day trip to Goa on a budget and to include things people can do. Since the author was recently in Goa, this was easier for us to test. While Gemini did a decent job at highlighting all the popular destinations, the answer was not detailed and not much different from any travel website. One positive of this is that the chatbot will likely not suggest anything incorrect.

On the other hand, I was impressed by Copilot’s exhaustive response that included hidden gems and even the names of cuisines one should try. We repeated the test with different variations, but the result remained consistent.

Next, we asked, “I live in India. Should I buy a subscription to Amazon Prime Videos or Netflix?” The response was thorough and included various parameters, including content depth, pricing, features, and benefits. While it did not directly suggest one among them, it listed why a user should pick either of the options. Copilot’s answer was the same.

Finally, we spent time chatting with Gemini. This test spanned a few hours, and we tested the chatbot on its ability to be engaging, entertaining, informative, and contextual. On all of these parameters, Gemini performed pretty well. It can tell you a joke, share little-known facts, give you a piece of advice, and even play word- and picture-based games with you. We also tested its memory, and it could remember the conversation even after texting for an hour. The only thing it cannot do is give a single-line response to messages like a human friend would.

Google Gemini’s image generation capability

In our testing, we came across a number of interesting things about Gemini AI’s image-generation capabilities. For instance, all generated images have a fixed resolution of 1536×1536, which cannot be changed. The chatbot also refuses any request to generate images of real-life people, which should minimise the risk of deepfakes (AI-generated images of people and objects that appear real).

But coming to the quality, Gemini did a faithful job of sticking to the prompt and generating images. It can generate random photos in a particular style, such as postmodern, realistic, and iconographic. The chatbot can also generate images in the style of popular artists in history. However, there are many restrictions, and you will likely find Gemini refusing your request if you ask for something too specific. But comparing it with Copilot, I found the images were generated faster, stayed true to the prompts, and appeared to have a wider range of styles we could tap into. However, it cannot be compared to dedicated image-generating AI models such as DALL-E and Midjourney.

Google Gemini: Bottomline

Overall, we found Gemini AI to be quite competent in most categories. As someone who has used the AI chatbot only infrequently since it became available, I can confidently say that the Gemini Pro model has made it better at understanding natural language and the context of queries. The free chatbot version is a reliable companion if one needs it to generate ideas, write an informal note, plan a trip, or even generate basic images. However, it should not be used as a research tool or for formal writing, as these are the two areas where it struggles the most.

Comparatively, Copilot is better at formal writing and itinerary generation, on par with holding conversations (albeit with a shorter memory) and comparisons. Gemini takes the crown at image generation, informal content generation, and engaging the user. Considering this is just the first iteration of the Gemini LLM, as opposed to the 4th iteration of GPT, we are curious to witness the different ways the tech giant further improves its AI assistant.



These AI tools could help boost your academic research

The future of academia is likely to be transformed by AI language models such as ChatGPT. Here are some other tools worth knowing about.

“ChatGPT will redefine the future of academic research. But most academics don’t know how to use it intelligently,” Mushtaq Bilal, a postdoctoral researcher at the University of Southern Denmark, posted on X.

Academia and artificial intelligence (AI) are becoming increasingly intertwined, and as AI continues to advance, it is likely that academics will continue to either embrace its potential or voice concerns about its risks.

“There are two camps in academia. The first is the early adopters of artificial intelligence, and the second is the professors and academics who think AI corrupts academic integrity,” Bilal told Euronews Next.

He places himself firmly in the first camp.

The Pakistani-born and Denmark-based professor believes that if used thoughtfully, AI language models could help democratise education and even give way to more knowledge.

Many experts have pointed out that the accuracy and quality of the output produced by language models such as ChatGPT are not trustworthy. The generated text can sometimes be biased, limited or inaccurate.

But Bilal says that understanding those limitations, paired with the right approach, can make language models “do a lot of quality labour for you,” notably for academia.

Incremental prompting to create a ‘structure’

To create an academia-worthy structure, Bilal says it is fundamental to master incremental prompting, a technique traditionally used in behavioural therapy and special education.

It involves breaking down complex tasks into smaller, more manageable steps and providing prompts or cues to help the individual complete each one successfully. The prompts then gradually become more complicated.

In behavioural therapy, incremental prompting allows individuals to build their sense of confidence. In language models, it allows for “way more sophisticated answers”.

In a thread on X (formerly Twitter), Bilal showed how he managed to get ChatGPT to provide a “brilliant outline” for a journal article using incremental prompting.

In his demonstration, Bilal started by asking ChatGPT about specific concepts relevant to his work, then about authors and their ideas, guiding the AI-driven chatbot through the contextual knowledge pertinent to his essay.

“Now that ChatGPT has a fair idea about my project, I ask it to create an outline for a journal article,” he explained, before declaring the results he obtained would likely save him “20 hours of labour”.

“If I just wrote a paragraph for every point in the outline, I’d have a decent first draft of my article”.
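
To make the technique concrete, here is a rough Python sketch of incremental prompting using OpenAI’s chat API. The model name and the prompts are illustrative assumptions, not Bilal’s actual ones; the key point is that each request carries the full conversation so far, so the final outline request benefits from all the accumulated context.

    # Sketch of incremental prompting: each request includes the whole
    # conversation so far, so context accumulates step by step.
    # Assumes the openai package (v1+) and an OPENAI_API_KEY environment variable.
    from openai import OpenAI

    client = OpenAI()
    messages = []
    steps = [
        "Explain the concept of 'world literature' as used in literary studies.",
        "Summarise the main scholarly debates around that concept.",
        "Given this context, create a detailed outline for a journal article "
        "applying the concept to contemporary South Asian fiction.",
    ]

    answer = ""
    for prompt in steps:
        messages.append({"role": "user", "content": prompt})
        reply = client.chat.completions.create(model="gpt-4", messages=messages)
        answer = reply.choices[0].message.content
        messages.append({"role": "assistant", "content": answer})

    print(answer)  # the outline, produced with all prior context in place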

Incremental prompting also allows ChatGPT and other AI models to help when it comes to “making education more democratic,” Bilal said.

Some people have the luxury of discussing potential academic outlines or angles for scientific papers with Harvard or Oxford professors, “but not everyone does,” he explained.

“If I were in Pakistan, I would not have access to Harvard professors but I would still need to brainstorm ideas. So instead, I could use AI apps to have an intelligent conversation and help me formulate my research”.

Bilal recently made ChatGPT think and talk like a Stanford professor. Then, to fact-check how authentic the output was, he asked the same questions to a real-life Stanford professor. The results were astonishing.

ChatGPT is only one of the many AI-powered apps you can use for academic writing, or to mimic conversations with renowned academics.

Here are other AI-driven tools to help your academic efforts, handpicked by Bilal.

In Bilal’s own words: “If ChatGPT and Google Scholar got married, their child would be Consensus — an AI-powered search engine”.

Consensus looks like most search engines but what sets it apart is that you ask Yes/No questions, to which it provides answers with the consensus of the academic community.

Users can also ask Consensus about the relationship between concepts and about something’s cause and effect. For example: Does immigration improve the economy?

Consensus would reply to that question by stating that most studies have found that immigration generally improves the economy, providing a list of the academic papers it used to arrive at the consensus, and ultimately sharing the summaries of the top articles it analysed.

The AI-powered search engine is only equipped to respond to six topics: economics, sleep, social policy, medicine, mental health, and health supplements.

Elicit, “the AI research assistant” according to its founders, also uses language models to answer questions, but its knowledge is solely based on research, enabling “intelligent conversations” and brainstorming with a very knowledgeable and verified source.

The software can also find relevant papers without perfect keyword matches, summarise them and extract key information.

Although language models like ChatGPT are not designed to intentionally deceive, they have been shown to generate text that is not based on factual information, and to include fake citations to papers that don’t exist.


But there is an AI-powered app that gives you real citations to actually published papers – Scite.

“This is one of my favourite ones to improve workflows,” said Bilal.

Similar to Elicit, upon being asked a question, Scite delivers answers with a detailed list of all the papers cited in the response.

“Also, if I make a claim and that claim has been refuted or corroborated by various people or various journals, Scite gives me the exact number. So this is really very, very powerful”.

“If I were to teach any seminar on writing, I would teach how to use this app”.


“Research Rabbit is an incredible tool that FAST-TRACKS your research. Best part: it’s FREE. But most academics don’t know about it,” tweeted Bilal.

Called by its founders “the Spotify of research,” Research Rabbit lets users add academic papers to “collections”.

These collections allow the software to learn about the user’s interests, prompting new relevant recommendations.

Research Rabbit also lets users visualise the scholarly network of papers and co-authorships in graphs, so they can follow the work on a single topic or by a single author and dive deeper into their research.

ChatPDF is an AI-powered app that makes reading and analysing journal articles easier and faster.


“It’s like ChatGPT, but for research papers,” said Bilal.

Users start by uploading a research paper’s PDF into the AI software and then ask it questions.

The app then prepares a short summary of the paper and provides the user with examples of questions that it could answer based on the full article.
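
ChatPDF’s internals are not public, but the general pattern it illustrates is straightforward. The sketch below shows one hedged way to reproduce it, assuming the `pypdf` and `openai` Python packages; the model name and file name are illustrative.

```python
# A rough sketch of the ChatPDF-style workflow: extract a paper's
# text, then ask a chat model questions grounded in that text.
# A production service would chunk and index long papers rather
# than send the whole document with every question.
from pypdf import PdfReader
from openai import OpenAI

def ask_about_paper(pdf_path: str, question: str) -> str:
    # Pull the raw text out of every page of the PDF.
    text = "\n".join(page.extract_text() or "" for page in PdfReader(pdf_path).pages)
    client = OpenAI()
    response = client.chat.completions.create(
        model="gpt-4o-mini",  # illustrative model name
        messages=[
            {"role": "system", "content": "Answer only from this paper:\n\n" + text},
            {"role": "user", "content": question},
        ],
    )
    return response.choices[0].message.content

print(ask_about_paper("paper.pdf", "Summarise the paper's main findings."))
```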

What promise does AI hold for the future of research?

The development of AI will be as fundamental “as the creation of the microprocessor, the personal computer, the Internet, and the mobile phone,” wrote Bill Gates in the latest post on his personal blog, titled ‘The Age of AI Has Begun’.

“Computers haven’t had the effect on education that many of us in the industry have hoped,” he wrote. 


“But I think in the next five to 10 years, AI-driven software will finally deliver on the promise of revolutionising the way people teach and learn”.




Rabbit r1 | Have we finally created a gadget that can eat your smartphone?

This year’s Consumer Electronics Show at Las Vegas was littered with updates from both start-ups and large tech firms that are building products harnessing, or in some cases, advancing the power of natural language processing (NLP), a burgeoning sub-field under artificial intelligence (AI).

With so many exhibits, it is difficult to point out any one piece of tech as exceptional this year. Still, an orange-coloured, square-shaped device unveiled at the ballroom at Wynn, and not at the official CES stage, grabbed the spotlight.

The palm-sized handheld, called Rabbit r1, received a fair amount of chatter at CES 2024 as it could do – per the company’s claim – several things that a smartphone can’t. Even Microsoft CEO Satya Nadella called it the ‘most impressive’ device, and compared it to the first iPhone unveiled by Steve Jobs.

So, what exactly does this device do?

If you want to book an Uber ride, the r1 can do it for you. If you want to plan a vacation, including booking air tickets and making room reservations, the r1 can do that for you. If you want some cooking ideas, the r1’s camera can scan the motley ingredients in your refrigerator and suggest a recipe based on your calorie requirement. All you have to do is just ‘tell it’ what to do.


Exploiting chatbots’ limitation

Granted, any of the latest-generation smartphones with a state-of-the-art voice assistant can do several tasks like searching the web, playing your favourite song, or making a call from a user’s phonebook. But executing tasks like booking a cab, reserving a hotel room, or putting together a recipe using computer vision, just by talking into a walkie-talkie-style device, is a stretch even for smartphone-based voice assistants.

Even the current crop of chatbots, like ChatGPT, Bard and Claude, can only text out responses through apps as they are incapable of executing actionable tasks. For instance, the ChatGPT app can text you a vacation plan. It can even tweak the itinerary if you ask it to make it easy or packed. But, it cannot open a ticket booking app or a room reservation portal to make a reservation for you.

Rabbit Inc., the maker of the r1, says that the current batch of chatbots has limited functionality because they are built on text-based AI models – more commonly known as large language models (LLMs). LLMs’ accuracy depends heavily on annotated data to train neural networks for every new task.

Extending LLMs’ capabilities

The Santa Monica-based start-up, on the other hand, has built its r1 device using a different AI model that is biased for action. The Rabbit OS, in a way, extends the capabilities of the current generation of voice assistants.

The AI model, which the company calls a large action model (LAM), takes advantage of advances in neuro-symbolic programming, a method that combines the data-driven capabilities of neural networks with symbolic reasoning techniques. This allows the device to learn directly from the user’s interactions with applications and execute tasks, essentially bypassing the need to translate text-based user requests into API calls.

Apart from bypassing the API route, a LAM-based OS caters to a more nuanced human-to-machine interaction. While ChatGPT can be creative in responding to prompts, a LAM-based OS learns routine, minimalistic tasks with the sole purpose of repeating them.

So, Rabbit Inc., in essence, has created a platform, underpinned by an AI model, that can mimic what humans do with their smartphones and then repeat it when asked to execute. The r1 is the company’s first generation device, which according to its founder Jesse Lyu, is a stand-alone gadget that is primarily driven by natural language “to get things done.”

The company has also cleverly priced the device at $199, significantly less than the price of most flagship smartphones. This makes it difficult to decipher whether customers will buy the device for the value it offers or just because it is cheap.

But is the price differentiation alone enough to trade in your existing smartphone for the new Rabbit r1?

A smartphone replacement?

Booking a ride, planning a vacation, or playing music are only a subset of the things we do with a smartphone. Over roughly the last decade and a half, the smartphone has become a pocket computer.

The app ecosystem built for this hardware has made the device so sticky that an average user picks up their smartphone at least 58 times a day and spends, on average, at least three hours with it. During that time, they use this mini-computer for a whole host of things: streaming videos, playing games, reading books, and interacting with friends and family via group chat applications.

Secondly, not everyone wants to speak into a device all the time to get something done. Most people are just fine typing in text prompts and getting responses in the same format. It gives them a layer of privacy that the r1 does not provide – that’s because the latter can only execute voice commands.

So, the smartphone, and its app ecosystem, is here to stay to cater to an entire gamut of user needs and wants for the foreseeable future.

Now, where does that leave Rabbit r1?

Into the Rabbit hole

Mr. Lyu believes the r1 will disrupt the smartphone market, but technically, his company’s palm-sized device is a strong contender in the voice assistant and smart speaker market, which is also a space that is growing quite steadily.

According to a 2022 joint report by NPR and Edison Research, in the U.S. alone, 62% of users over the age of 18 use a voice assistant on some smart device. And the number of tasks they do with it is also increasing: in 2022, smart speaker users requested an average of 12.4 tasks on their device each week, up from 7.5 in 2017. Smartphone voice assistant users requested an average of 10.7 tasks weekly, up from 8.8 in 2020.

This shows that the r1 can play an important transition role in the audio space by driving hardware designers and software developers in the direction of building more voice-based, interoperable applications. Alternatively, Rabbit Inc. could also build a super app, something like WeChat, that enables chatter between apps on a smartphone to ‘get things done.’

That’s a call Rabbit Inc. should take based on the feedback it receives from its customers. As of January 19, five batches of 10,000 Rabbit r1 devices each had sold out, and the first batch will start shipping in April. Customer experience with this new gadget will play a big role in how deep the r1 takes consumers down the rabbit hole.


Judges in England and Wales are given cautious approval to use AI in writing legal opinions

England’s 1,000-year-old legal system — still steeped in traditions that include wearing wigs and robes — has taken a cautious step into the future by permitting judges to use artificial intelligence (AI) to help produce rulings.

In December, the Courts and Tribunals Judiciary said AI could help write opinions but stressed it should not be used for research or legal analyses because the technology can fabricate material and provide misleading, inaccurate and biased information. “Judges do not need to shun the careful use of AI,” said Master of the Rolls Geoffrey Vos, the second-highest-ranking judge in England and Wales. “But they must ensure that they protect confidence and take full personal responsibility for everything they produce.”

Vigorous public debate on the use of AI

At a time when scholars and legal experts are pondering a future when AI could replace lawyers, help select jurors or even decide cases, the approach spelt out on December 11 by the judiciary is restrained. But for a profession slow to embrace technological change, it is a proactive step as government and industry — and society in general — react to a rapidly advancing technology alternately portrayed as a panacea and a menace.

“There’s a vigorous public debate right now about whether and how to regulate artificial intelligence,” said Ryan Abbott, a law professor at the University of Surrey and author of The Reasonable Robot: Artificial Intelligence and the Law. “AI and the judiciary is something people are uniquely concerned about, and it’s somewhere where we are particularly cautious about keeping humans in the loop,” he said. “So I do think AI may be slower disrupting judicial activity than it is in other areas and we’ll proceed more cautiously there.”

Mr. Abbott and other legal experts applauded the judiciary for addressing the latest iterations of AI and said the guidance would be widely viewed by courts and jurists around the world who are eager to use AI or anxious about what it might bring.

The EU’s AI guidance

In taking what was described as an initial step, England and Wales moved toward the forefront of courts addressing AI, though it’s not the first such guidance.

Five years ago, the European Commission for the Efficiency of Justice of the Council of Europe issued an ethical charter on the use of AI in court systems. While that document is not up to date with the latest technology, it did address core principles such as accountability and risk mitigation that judges should abide by, said Giulia Gentile, a lecturer at Essex Law School who studies the use of AI in legal and justice systems.

Although U.S. Supreme Court Chief Justice John Roberts addressed the pros and cons of artificial intelligence in his annual report, the federal court system in America has not yet established guidance on AI, and state and county courts are too fragmented for a universal approach. But individual courts and judges at the federal and local levels have set their own rules, said Cary Coglianese, a law professor at the University of Pennsylvania.

“It is certainly one of the first, if not the first, published set of AI-related guidelines in the English language that applies broadly and is directed to judges and their staffs,” Mr. Coglianese said of the guidance for England and Wales. “I suspect that many, many judges have internally cautioned their staffs about how existing policies of confidentiality and use of the internet apply to the public-facing portals that offer ChatGPT and other such services.”

Limitations of AI highlighted

The guidance shows the courts’ acceptance of the technology, but not a full embrace, Ms. Gentile said. She was critical of a section that said judges don’t have to disclose their use of the technology and questioned why there was no accountability mechanism. “I think that this is certainly a useful document, but it will be very interesting to see how this could be enforced,” she said. “There is no specific indication of how this document would work in practice. Who will oversee compliance with this document? What are the sanctions? Or maybe there are no sanctions. If there are no sanctions, then what can we do about this?”

In its effort to maintain the court’s integrity while moving forward, the guidance is rife with warnings about the limitations of the technology and possible problems if a user is unaware of how it works.

At the top of the list is an admonition about chatbots, such as ChatGPT, the conversational tool that exploded into public view last year and has generated the most buzz over the technology because of its ability to swiftly compose everything from term papers to songs to marketing materials.

The pitfalls of the technology in court are already infamous after two New York lawyers relied on ChatGPT to write a legal brief that quoted fictional cases. The two were fined by an angry judge who called the work they had signed off on “legal gibberish”.

Because chatbots can remember questions they are asked and retain other information they are provided, judges in England and Wales were told not to disclose anything private or confidential. “Do not enter any information into a public AI chatbot that is not already in the public domain,” the guidance said. “Any information that you input into a public AI chatbot should be seen as being published to all the world.”

Other warnings include being aware that much of the legal material that AI systems have been trained on comes from the internet and is often based largely on U.S. law. But jurists who have large caseloads and routinely write decisions dozens — even hundreds — of pages long can use AI as a secondary tool, particularly when writing background material or summarising information they already know, the courts said.

In addition to using the technology for emails or presentations, judges were told they could use it to quickly locate material they are familiar with but don’t have within reach. But it shouldn’t be used for finding new information that can’t independently be verified, and it is not yet capable of providing convincing analysis or reasoning, the courts said.

Appeals Court Justice Colin Birss recently praised how ChatGPT helped him write a paragraph in a ruling in an area of law he knew well. “I asked ChatGPT can you give me a summary of this area of law, and it gave me a paragraph,” he told The Law Society. “I know what the answer is because I was about to write a paragraph that said that, but it did it for me and I put it in my judgment. It’s there and it’s jolly useful.”
