The Hindu Morning Digest, March 09, 2024


Abducted Army officer rescued in Manipur’s Thoubal

Amid the ongoing ethnic conflict in Manipur, a serving Junior Commissioned Officer of the Indian Army, who was abducted from his home in Thoubal district on Friday morning, was rescued the same evening after an hours-long search operation launched by security forces in the state, The Hindu has learnt.

Amid fears of AI misuse in upcoming poll, OpenAI executives met Election Commission officials in February

Representatives from OpenAI, the Artificial Intelligence firm that developed ChatGPT, met with officials from the Election Commission of India in February to ensure that its popular platform is not misused in the upcoming Lok Sabha election, and to find ways to collaborate with the ECI. 

Congress releases first list of 39 candidates; Rahul Gandhi to contest from Wayanad

Congress leader Rahul Gandhi will seek re-election from Wayanad Lok Sabha seat in Kerala, the party announced on March 8. His name was part of the party’s first list of 39 Lok Sabha candidates.

PM to inaugurate passenger terminals at 12 airports across India

Prime Minister Narendra Modi will inaugurate 15 new airport passenger buildings across the country between March 9 and 10 worth more than ₹9,800 crore.

Jaishankar meets Japan’s PM Kishida

External Affairs Minister S. Jaishankar on Friday called on Japanese Prime Minister Fumio Kishida and apprised him of the progress made by the two countries in the just-concluded Foreign Ministers Strategic Dialogue.

Bhutan PM Tobgay’s India visit to focus on bilateral pacts, development and connectivity projects

Bhutan’s Prime Minister Tshering Tobgay will arrive in Delhi next week, in his first visit abroad since he took over office in January this year, sources confirmed to The Hindu.

NIA chargesheets one more accused in terror graffiti case

The National Investigation Agency (NIA) has charge-sheeted one more accused in the Shivamogga IS conspiracy case related to the graffiti written in Mangaluru supporting banned terrorist outfits – the Islamic State (IS), Lashkar-e-Taiba (LeT), and the Taliban. The NIA has also invoked additional charges against two others in the case.

PM Modi reaches Assam amid anti-CAA mood

Prime Minister Narendra Modi reached Assam’s Kaziranga National Park and Tiger Reserve on Friday evening amid rising sentiments against the Citizenship (Amendment) Act.

Minicoy island to see deployment of BrahMos missiles in future as part of expansion

Radars, jetties, airfield and BrahMos supersonic cruise missiles – the Indian Navy’s newest base being established on Minicoy Island in Lakshadweep, INS Jatayu, will have all these and many more. The upgrade is part of a long-term capability development plan which officials and experts say will shore up India’s security footprint in the islands located very close to critical Sea Lanes of Communication (SLOC).

Centre tweaks Prime Minister’s Rooftop Solar ‘free electricity’ scheme

The Centre has tweaked the new ₹75,000-crore PM-Surya Ghar Muft Bijli Yojna (Prime Minister’s Rooftop Solar: Free Electricity Scheme). From an initial plan to fully subsidise the installation of 1-3 KW solar systems in one crore households via tie-ups with renewable energy service companies, the scheme will now only contribute up to 60% of the costs, The Hindu has learnt.

Indian diplomat met ‘Afghan authorities’ in Kabul, says MEA

A senior Indian diplomat has met with ‘Afghan authorities’ in Kabul, the Ministry of External Affairs (MEA) confirmed on Friday. The development came months after the embassy of Afghanistan here, which was earlier run by officials affiliated with the pre-Taliban government of the Islamic Republic of Afghanistan, was shut down and its consular responsibilities were taken over by Afghan officials considered to be pro-Taliban.

Odisha Congress adopting ‘wait and watch’ strategy in view of BJP-BJD alliance talk

With reported disagreement over seat sharing between the Bharatiya Janata Party and the Biju Janata Dal delaying the announcement of a formal alliance that had entered its final stage, the Odisha Congress seems to be adopting a ‘wait and watch’ strategy to capitalise on the situation to its maximum advantage.

Congress promises ‘Right to Apprenticeship’ for youth below 25

With the tagline “Pehli Naukri Pakki” (first job is assured), the “right to apprenticeship” is one of the marquee promises in the Congress’s election manifesto, putting the issue of unemployment at the centre of the party’s campaign against the Narendra Modi government.

Electoral bonds case | Five-judge Bench to hold special sitting on SBI plea for more time

A special sitting by a five-judge Bench headed by Chief Justice of India D.Y. Chandrachud is scheduled on March 11 to hear an application filed by the State Bank of India (SBI) seeking time till June 30 to share details of electoral bonds purchased anonymously and encashed by political parties since April 2019.

Gadgets found with Sikh extremists: Assam jail superintendent arrested

The Assam Police arrested the superintendent of Dibrugarh Central Jail on March 7 night over the seizure of electronic gadgets from the possession of 10 inmates belonging to a radical pro-Khalistan organisation.

Safety guide launched for journalists covering Lok Sabha elections

The Committee to Protect Journalists (CPJ), along with The Hindu, launched a ‘Safety Guide for Journalists covering Indian elections 2024’ at an online event on March 8.

Centre warns against offers of jobs with Russian Army

Offers for support jobs with the Russian Army made by unverified agents are “fraught with danger and risk to life”, the External Affairs Ministry said on Friday, announcing that stern action has been initiated by the Central Bureau of Investigation against the agencies that conned Indian nationals into fighting for the Russian forces in the Russia-Ukraine conflict.

Biden vs Trump | What do Super Tuesday results mean for U.S. and India?

In this episode of Worldview, we discuss what a rematch between Biden and Trump in the U.S. presidential election would mean for U.S. foreign policy, geopolitics and India.

IND vs ENG fifth Test | Rohit and Gill’s tons, Padikkal and Sarfaraz’s fifties have England reeling

Relentless India punished England all day and left it staring at another defeat after just two days of the fifth and final Test here.


Elon Musk vs OpenAI: AI Firm Refutes Allegations, Know The Timeline

On February 29, Elon Musk filed a lawsuit against OpenAI and its CEO, Sam Altman. The primary allegation was that the company breached its founding agreement with Musk—who was one of the co-founders of the AI firm—by entering a partnership with Microsoft and functioning as its “closed-source de facto subsidiary”, intending to maximise profits. This, as per the billionaire, goes against the commitment made to run as a nonprofit and keep the project open-source.

The lawsuit was filed with a San Francisco court, and the first hearing is yet to take place. Meanwhile, OpenAI, on Wednesday, retaliated against the allegations by publishing an extensive post containing email correspondence with Musk dating back to 2015 and said it would move to “dismiss all of Elon’s claims”.

OpenAI alleged that Musk wanted OpenAI to merge with Tesla or take full control of the organisation himself. “We couldn’t agree to terms on a for-profit with Elon because we felt it was against the mission for any individual to have absolute control over OpenAI,” stated the post, which is authored by OpenAI co-founders Greg Brockman, Ilya Sutskever, John Schulman, Sam Altman, and Wojciech Zaremba. The post also shows through email interactions that the billionaire wanted OpenAI to “attach to Tesla as its cash cow”. If true, this contradicts Musk’s stated intention of keeping the AI firm a nonprofit.

Another email written by Sutskever stated, “As we get closer to building AI, it will make sense to start being less open. The Open in openAI means that everyone should benefit from the fruits of AI after it’s built, but it’s totally OK not to share the science,” to which Musk replied, “Yup.” This email would directly contradict Musk’s allegation that the AI firm is turning closed-source.

A report by The Verge, based on the court filings, points out that a founders’ agreement is not a contract or a binding agreement that can be breached. As such, Musk’s claims against OpenAI could potentially be dismissed.

“We’re sad that it’s come to this with someone whom we’ve deeply admired—someone who inspired us to aim higher, then told us we would fail, started a competitor, and then sued us when we started making meaningful progress towards OpenAI’s mission without him,” the statement said.

One thing OpenAI’s response proves is that the rivalry between the two parties is not a recent one; it goes as far back as 2015. For those not entirely familiar with the two parties’ history, here is the series of events that connects the dots and makes sense of this developing saga.

Elon Musk vs OpenAI: Timeline of the decade-long rivalry

Those who follow Musk on X or keep up with controversies in the tech space are no strangers to the antics of the second richest person in the world (Amazon founder Jeff Bezos overtook him for the top spot on Tuesday). The Tesla CEO is known for his unfiltered social media posts, interviews, and impulsive decision-making. From buying X after making a social media post and rebranding the entire platform within a week, to replying to an antisemitic post and hurling expletives at Disney CEO Bob Iger for pulling advertising from the platform, blaming the advertisers for killing it, the list is quite long.

But these antics are not new. In 2015, Musk co-founded OpenAI along with Altman, President and Chairman Greg Brockman and several others. Musk was also the largest investor in the company, which dedicated itself to developing artificial intelligence, as per a report by TechCrunch. However, to everyone’s surprise, the billionaire resigned from his board seat in 2018.

The beginning of the feud

The reason behind Musk’s resignation depends on who you ask. The X owner cited “a potential future conflict [of interest]” with his role as the CEO of Tesla, since the electric vehicle giant was also developing AI for its self-driving cars. However, a Semafor report, citing unnamed sources, stated that the billionaire felt OpenAI had fallen behind other players like Google and proposed to take over the company himself, an offer the board promptly rejected, leading to his exit. OpenAI has now confirmed this.

However, the exit was merely the beginning. Just a year later, OpenAI announced that it was creating a for-profit entity to fulfil its ambitious goals and cover its costs. The same year, Microsoft invested $1 billion into the AI firm after finalising a multi-year partnership. It was also the year GPT-2 was announced, generating considerable buzz online.

The events were interesting as not only was the company moving in the opposite direction to what Musk philosophised, but the company also witnessed unprecedented success — both financially and technologically, which is something the billionaire reportedly did not think was possible.

Arrival of ChatGPT

However, until 2022, nothing more was heard from either party on the topic. In November 2022, OpenAI launched ChatGPT, the AI-powered chatbot that arguably started the AI arms race. Soon, the silence was broken by Musk. Replying to a post in which a user asked the chatbot to write a tweet in his style, he alleged that OpenAI had access to the X database for training and said he had pulled the plug on it. This was also the first time Musk publicly said, “OpenAI was started as open-source & non-profit. Neither are still true.”

The billionaire did not stop there. Throughout 2023, he took shots at the company multiple times. In February, he claimed that OpenAI was created to be open-source, and that this was why he named it OpenAI. He added, “But now it has become a closed-source, maximum-profit company effectively controlled by Microsoft.”

Again, in March 2023, he posted, “I’m still confused as to how a non-profit to which I donated ~$100M somehow became a $30B market cap for-profit. If this is legal, why doesn’t everyone do it?” Interestingly, the allegations in these three posts are also the main accusations mentioned in the lawsuit.

And that brings us to the present, as we wait for the lawsuit to proceed. The lawsuit will also mark the beginning of the climax of the Elon Musk vs OpenAI saga, which has been building for almost a decade. To the casual spectator, it might simply be a corporate feud between two stakeholders, but a deeper inspection shows that it is much bigger than that. On one side is the serial entrepreneur known for repeated success and a strong (sometimes dogmatic) philosophical take on technology; on the other is the organisation hailed as the pioneer of generative AI, which could be on the cusp of developing artificial general intelligence. Whichever way the lawsuit goes, it could change the course of AI as well.


ChatGPT Fever Spreads to US Workplace as Firms Raise Concerns Over Leaks

Many workers across the US are turning to ChatGPT to help with basic tasks, a Reuters/Ipsos poll found, despite fears that have led employers such as Microsoft and Google to curb its use. Companies worldwide are considering how to best make use of ChatGPT, a chatbot program that uses generative AI to hold conversations with users and answer myriad prompts. Security firms and companies have raised concerns, however, that it could result in intellectual property and strategy leaks.

Anecdotal examples of people using ChatGPT to help with their day-to-day work include drafting emails, summarising documents, and doing preliminary research.

Some 28 percent of respondents to the online poll on artificial intelligence (AI) between July 11 and 17 said they regularly use ChatGPT at work, while only 22 percent said their employers explicitly allowed such external tools.

The Reuters/Ipsos poll of 2,625 adults across the United States had a credibility interval, a measure of precision, of about 2 percentage points.

Some 10 percent of those polled said their bosses explicitly banned external AI tools, while about 25 percent did not know if their company permitted the use of the technology.

ChatGPT became the fastest-growing app in history after its launch in November. It has created both excitement and alarm, bringing its developer OpenAI into conflict with regulators, particularly in Europe, where the company’s mass data-collecting has drawn criticism from privacy watchdogs.

Human reviewers from other companies may read any of the generated chats, and researchers have found that similar AI models can reproduce data absorbed during training, creating a potential risk for proprietary information.

“People do not understand how the data is used when they use generative AI services,” said Ben King, VP of customer trust at corporate security firm Okta.

“For businesses, this is critical, because users don’t have a contract with many AIs – because they are a free service – so corporates won’t have to run the risk through their usual assessment process,” King said.

OpenAI declined to comment when asked about the implications of individual employees using ChatGPT but highlighted a recent company blog post assuring corporate partners that their data would not be used to train the chatbot further unless they gave explicit permission.

When people use Google’s Bard it collects data such as text, location, and other usage information. The company allows users to delete past activity from their accounts and request that content fed into the AI be removed. Alphabet-owned Google declined to comment when asked for further detail.

Microsoft did not immediately respond to a request for comment.


A US-based employee of Tinder said workers at the dating app used ChatGPT for “harmless tasks” like writing emails even though the company does not officially allow it.

“It’s regular emails. Very non-consequential, like making funny calendar invites for team events, farewell emails when someone is leaving … We also use it for general research,” said the employee, who declined to be named because they were not authorized to speak with reporters.

The employee said Tinder has a “no ChatGPT rule” but that employees still use it in a “generic way that doesn’t reveal anything about us being at Tinder”.

Reuters was not able to independently confirm how employees at Tinder were using ChatGPT. Tinder said it provided “regular guidance to employees on best security and data practices”.

In May, Samsung Electronics banned staff globally from using ChatGPT and similar AI tools after discovering an employee had uploaded sensitive code to the platform.

“We are reviewing measures to create a secure environment for generative AI usage that enhances employees’ productivity and efficiency,” Samsung said in a statement on August 3.

“However, until these measures are ready, we are temporarily restricting the use of generative AI through company devices.”

Reuters reported in June that Alphabet had cautioned employees about how they use chatbots including Google’s Bard, at the same time as it markets the program globally.

Google said although Bard can make undesired code suggestions, it helps programmers. It also said it aimed to be transparent about the limitations of its technology.


Some companies told Reuters they are embracing ChatGPT and similar platforms while keeping security in mind.

“We’ve started testing and learning about how AI can enhance operational effectiveness,” said a Coca-Cola spokesperson in Atlanta, Georgia, adding that data stays within its firewall.

“Internally, we recently launched our enterprise version of Coca-Cola ChatGPT for productivity,” the spokesperson said, adding that Coca-Cola plans to use AI to improve the effectiveness and productivity of its teams.

Tate & Lyle Chief Financial Officer Dawn Allen, meanwhile, told Reuters that the global ingredients maker was trialing ChatGPT, having “found a way to use it in a safe way”.

“We’ve got different teams deciding how they want to use it through a series of experiments. Should we use it in investor relations? Should we use it in knowledge management? How can we use it to carry out tasks more efficiently?”

Some employees say they cannot access the platform on their company computers at all.

“It’s completely banned on the office network like it doesn’t work,” said a Procter & Gamble employee, who wished to remain anonymous because they were not authorized to speak to the press.

P&G declined to comment. Reuters was not able to independently confirm whether employees at P&G were unable to use ChatGPT.

Paul Lewis, chief information security officer at cyber security firm Nominet, said firms were right to be wary.

“Everybody gets the benefit of that increased capability, but the information isn’t completely secure and it can be engineered out,” he said, citing “malicious prompts” that can be used to get AI chatbots to disclose information.

“A blanket ban isn’t warranted yet, but we need to tread carefully,” Lewis said. 

© Thomson Reuters 2023  


Why we can’t open-source a solution to A.I.’s ethical issues

While open-source code has revolutionized the world of technology, recent developments like the rise of foundation models, accelerated investment into artificial intelligence, and escalating geopolitical A.I. arms races have forced the open-source community to confront the ethical issues surrounding open-source code. 

Potential intellectual property violations, the perpetuation of bias and discrimination, privacy and security risks, power dynamics, and governance issues within the community, as well as the environmental impact, are all ethical issues that need addressing.

These issues have kickstarted a debate on whether a move from an open-source movement to an ethical-source one could be the solution. Many developers have advocated for licenses (much like the Hippocratic License) that put ethical restrictions on the use of open-source code. Others point to the potential role of government regulators as the solution.

When it comes to machine learning models, there are a lot of unknown unknowns. The developers of these models must now face the decision as to whether to open-source or not. But for developers to predict all the possible use cases for machine learning is as impossible as the mathematicians at Bletchley Park predicting all the potential use cases for computers. Most developers recognize that by making their code open, they lose control of how it’s used–and who uses it.

Licenses are increasingly heralded as a solution to this problem. However, not only does restricting the use of open-source code with additional licenses contradict the core principle of the open-source community that code should be accessible to everyone, but it could also damage the collaborative environment that’s been fundamental to the open-source community’s ability to speed up technological development.

There’s also a lot of doubt as to whether ethical licenses will actually reduce the risk of code being used for nefarious purposes. Many countries already have human rights laws, and individuals or organizations violating those laws should be prosecuted accordingly, regardless of the method or technology used to perpetrate the abuse. If such laws do not deter these violators, then it’s unlikely that a licensing agreement would have any impact on the course of their actions.

A.I. systems are complex, which makes enforcing their ethical use burdensome. The rapid advancement of A.I. technology also makes it challenging to keep up with developments and potential ethical implications. Additionally, a lack of transparency and accountability for A.I. systems can make it challenging to hold organizations responsible for ethical violations. Addressing these challenges requires ongoing collaboration, dialogue, and investment in ethical research and development to ensure that A.I. is used in a way that aligns with societal values and promotes the greater good.

The burden of ethical use should rest with those who use open-source code to build A.I. products, as opposed to those who write the code. That’s why government regulation is key to ensuring the ethical use of A.I. Regulation would require ethical use to be defined rigorously and would result in the creation of bureaucratic structures for evaluating A.I. systems.

Similar to how governments regulate and scrutinize medical products before approving them for public consumption, governments could also ensure that A.I. passes certain tests before it’s released to the public. The onus should be on governments, armed with ample resources to investigate these tools, to take responsibility for these tests, rather than on developers, who can instead focus on building more advanced A.I.

Any company, organization, or individual should have to provide clear details about the properties and broader impacts of the model, including the data used to train it and the code used to develop it. They should disclose any potentially harmful applications before making it available for use. This application approach is not so different from the application process used by many of the large academic conferences, which enforce a submission structure through which the applicant must transparently address the ethical issues and broader impact.

In general, evaluating models to the point where users can be certain that they operate ethically and correctly all the time remains a problem in the field of A.I. Even OpenAI has struggled to figure out how to do it with ChatGPT. The good news is that since there has been a lot of research into understanding the fairness, accountability, and transparency of A.I. models, governments already have a lot of tools from which to begin building regulatory frameworks for A.I. 

The European Union introduced a proposal for a new legal framework on A.I. in 2021, which aims to ensure that A.I. is developed and used in a way that aligns with EU values. The United States has established the National Artificial Intelligence Initiative Office (NAIIO) to coordinate federal investments in A.I. research and development. Canada has similarly established the Canadian Institute for Advanced Research (CIFAR) to fund interdisciplinary research on A.I. and to develop ethical and technical standards for A.I. Singapore has introduced the Model AI Governance Framework to provide guidance on the responsible development and use of A.I. 

By taking a multi-stakeholder approach and investing in ethical research and development, governments can ensure that A.I. is developed and used in a way that aligns with societal values and promotes the greater good.

We are moving into an era in which decisions are being made for and about individuals using algorithmic processes that do not have human involvement. Individuals have a right to an explanation about how these A.I. systems reached those decisions, and that explanation depends on having a transparent process. 

While the movement for ethical-source licenses comes from a place of strong principles and positive intentions, its effects are limited by a lack of enforcement mechanisms, limited scope, low awareness, the availability of alternative licenses, and a lack of standardization; moreover, its approach requires a reactive legal effort. Licensors would have to monitor all uses of their open-source code, which is impractical. Governments, on the other hand, have the opportunity to protect the public from the potential negative impact of A.I. systems, and they can do so through proactive regulation and enforcement.

Frederik Hvilshøj is the machine learning lead at Encord.

The opinions expressed in commentary pieces are solely the views of their authors and do not necessarily reflect the opinions and beliefs of Fortune.


Most Americans are uncomfortable with artificial intelligence in health care, survey finds | CNN


Most Americans feel “significant discomfort” about the idea of their doctors using artificial intelligence to help manage their health, a new survey finds, but they generally acknowledge AI’s potential to reduce medical mistakes and to eliminate some of the problems doctors may have with racial bias.

Artificial intelligence is the theory and development of computer programs that can solve problems and perform tasks that typically would require human intelligence – machines that can essentially learn like humans can, based on the input they have been given.

You probably already use technology that relies on artificial intelligence every day without even thinking about it.

When you shop on Amazon, for example, it’s artificial intelligence that guides the site to recommend cat toys if you’ve previously shopped for cat food. AI can also help unlock your iPhone, drive your Tesla, answer customer service questions at your bank and recommend the next show to binge on Netflix.

Americans may like these individualized services, but when it comes to AI and their health care, it may be a digital step too far for many.

Sixty percent of Americans who took part in a new survey by the Pew Research Center said that they would be uncomfortable with a health care provider who relied on artificial intelligence to do something like diagnose their disease or recommend a treatment. About 57% said that the use of artificial intelligence would make their relationship with their provider worse.

Only 38% felt that using AI to diagnose disease or recommend treatment would lead to better health outcomes; 33% said it would lead to worse outcomes; and 27% said it wouldn’t make much of a difference.

About 6 in 10 Americans said they would not want AI-driven robots to perform parts of their surgery. Nor do they like the idea of a chatbot working with them on their mental health; 79% said they wouldn’t want AI involved in their mental health care. There’s also concern about security when it comes to AI and health care records.

“Awareness of AI is still developing. So one dynamic here is, the public isn’t deeply familiar with all of these technologies. And so when you consider their use in a context that’s very personal, something that’s kind of high-stakes as your own health, I think that the notion that folks are still getting to know this technology is certainly one dynamic at play,” said Alec Tyson, Pew’s associate director of research.

The findings, released Wednesday, are based on a survey of 11,004 US adults conducted from December 12-18 using the center’s American Trends Panel, an online survey group recruited through random sampling of residential addresses across the country. Pew weights the survey to reflect US demographics including race, gender, ethnicity, education and political party affiliation.

The respondents expressed concern over the speed of the adoption of AI in health and medicine. Americans generally would prefer that health care providers move with caution and carefully consider the consequences of AI adoption, Tyson said.

But they’re not totally anti-AI when it comes to health care. They’re comfortable with using it to detect skin cancer, for instance; 65% thought it could improve the accuracy of a diagnosis. Some dermatologists are already exploring the use of AI technology in skin cancer diagnosis, with some limited success.

Four in 10 Americans think AI could also help providers make fewer mistakes, which are a serious problem in health care. A 2022 study found that medical errors cost about $20 billion a year and result in about 100,000 deaths each year.

Some Americans also think AI may be able to build more equity into the health care system.

Studies have shown that most providers have some form of implicit bias, with more positive attitudes toward White patients and negative attitudes toward people of color, and that could affect their decision-making.

Among the survey participants who understand that this kind of bias exists, the predominant view was that AI could help when it came to diagnosing a disease or recommending treatments, making those decisions more data-driven.

Tyson said that when people were asked to describe in their own words how they thought AI would help fight bias, one participant cited class bias: They believed that, unlike a human provider, an AI program wouldn’t make assumptions about a person’s health based on the way they dressed for the appointment.

“So this is a sense that AI is more neutral or at least less biased than humans,” Tyson said. However, AI is developed with human input, so experts caution that it may not always be entirely without bias.

Pew’s earlier surveys about artificial intelligence have found a general openness to AI, he said, particularly when it’s used to augment, rather than replace, human decision-making.

“AI as just a piece of the process in helping a human make a judgment, there is a good amount of support for that,” Tyson said. “Less so for AI to be the final decision-maker.”

For years, radiologists have used AI to analyze x-rays and CT scans to look for cancer and improve diagnostic capacity. About 30% of radiologists use AI as a part of their practice, and that number is growing, a survey found – but more than 90% in that survey said they wouldn’t trust these tools for autonomous use.

Dr. Victor Tseng, a pulmonologist and medical director of California-based Ansible Health, said that his practice is one of many that have been exploring the AI program ChatGPT. His group has set up a committee to look into its uses and to discuss the ethics around using it so the practice could set up guardrails before putting it into clinical practice.

Tseng’s group published a study this month that showed that ChatGPT could correctly answer enough practice questions that it would have passed the US Medical Licensing Examination.

Tseng said he doesn’t believe that AI will ever replace doctors, but he thinks technology like ChatGPT could make the medical profession more accessible. For example, a doctor could ask ChatGPT to simplify complicated medical jargon so that someone with a seventh-grade education could understand.

“AI is here. The doors are open,” Tseng said.

The Pew survey findings suggest that attitudes could shift as more Americans become familiar with artificial intelligence. Respondents who were more familiar with a technology were more supportive of it, but even they were cautious about doctors moving too quickly to adopt it.

“Whether you’ve heard a lot about AI, just a little or maybe even nothing at all, all of those segments of the public are really in the same space,” Tyson said. “They echo this sentiment of caution of wanting to move carefully in AI adoption in health care.”


Chinese Firms Are Scrambling to Offer Homegrown ChatGPT Alternatives

Microsoft-backed OpenAI has kept its hit ChatGPT app off-limits to users in China, but the app is attracting huge interest in the country, with firms rushing to integrate the technology into their products and launch rival solutions.

While residents in the country are unable to create OpenAI accounts to access the artificial intelligence-powered (AI) chatbot, virtual private networks and foreign phone numbers are helping some bypass those restrictions.

At the same time, the OpenAI models behind the ChatGPT programme, which can write essays, recipes and complex computer code, are relatively accessible in China and increasingly being incorporated into Chinese consumer technology applications from social networks to online shopping.

The tool’s surging popularity is rapidly raising awareness in China about how advanced US AI is and, according to analysts, just how far behind tech firms in the world’s second-largest economy are as they scramble to catch up.

“There is huge excitement around ChatGPT. Unlike the metaverse which faces huge difficulty in finding real-life application, ChatGPT has suddenly helped us achieve human-computer interaction,” said Ding Daoshi, director of Beijing-based internet consultancy Sootoo. “The changes it will bring about are more immediate, more direct and way quicker.”

Neither OpenAI nor ChatGPT itself is blocked by Chinese authorities, but OpenAI does not allow users in mainland China, Hong Kong, Iran, Russia and parts of Africa to sign up.

OpenAI told Reuters it is working to make its services more widely available.

“While we would like to make our technology available everywhere, conditions in certain countries make it difficult or impossible for us to do so in a way that is consistent with our mission,” the San Francisco-based firm said in an emailed statement. “We are currently working to increase the number of locations where we can provide safe and beneficial access to our tools.”

In December, Tencent Holdings’ WeChat, China’s biggest messaging app, shut several ChatGPT-related programmes that had appeared on the network, according to local media reports, but they have continued to spring up.

Dozens of bots rigged to ChatGPT technology have emerged on WeChat, with hobbyists using it to make programmes or automated accounts that can interact with users. At least one account charges users a fee of CNY 9.99 ($1.47 or roughly Rs. 120) to ask 20 questions.

Tencent did not respond to Reuters’ request for comments.

ChatGPT converses fluently in Chinese, which has helped drive its unofficial adoption in the country.

Chinese firms also use proxy tools or existing partnerships with Microsoft, which is investing billions of dollars in OpenAI, to access tools that allow them to embed AI technology into their products.

Shenzhen-based Proximai in December introduced a virtual character into its 3D game-like social app that used ChatGPT’s underlying tech to converse. Beijing-based entertainment software company Kunlun Tech plans to incorporate ChatGPT in its web browser Opera.

SleekFlow, a Tiger Global-backed startup in Hong Kong, said it was integrating the AI into its customer relations messaging tools.

“We have clients all over the world,” Henson Tsai, SleekFlow’s founder said. “Among other things, ChatGPT does excellent translations, sometimes better than other solutions available on the market.”


Reuters’ tests of ChatGPT indicate that the chatbot is not averse to questions that would be sensitive in mainland China. Asked for its thoughts on Chinese President Xi Jinping, for instance, it responded it does not have personal opinions and presented a range of views.

But some of its proxy bots on WeChat have blacklisted such terms, according to other Reuters checks, complying with China’s heavy censorship of its cyberspace. When asked the same question about Xi on one ChatGPT proxy bot, it responded by saying that the conversation violated rules.

To comply with Chinese rules, Proximai’s founder Will Duan said his platform would filter information presented to users during their interaction with ChatGPT.

Chinese regulators, which last year introduced rules to strengthen governance of “deepfake” technology, have not commented on ChatGPT. However, state media this week warned about stock market risks amid a frenzy over local ChatGPT-concept stocks.

The Cyberspace Administration of China, the internet regulator, did not respond to Reuters’ request for comment.

“With the regulations released last year, the Chinese government is saying: we already see this technology coming and we want to be ahead of the curve,” said Rogier Creemers, an assistant professor at Leiden University.

“I fully expect the great majority of the AI-generated content to be non-political.”

Chinese rivals

Joining the buzz have been some of the country’s largest tech giants, such as Baidu and Alibaba, which gave updates this week on AI models they have been working on, sending their shares sharply higher.

Baidu said this week it would complete internal testing of its “Ernie Bot” in March, a big AI model the search firm has been working on since 2019.

On Wednesday, Alibaba said that its research institute Damo Academy was also testing a ChatGPT-style tool.

Duan, whose company has been using a Baidu AI chatbot named Plato for natural language processing, said ChatGPT was at least a generation more powerful than China’s current NLP solutions, though it was weaker in some areas, such as understanding conversation context.

Baidu did not reply to Reuters’ request for comments.

OpenAI first opened access to GPT-3, or Generative Pre-trained Transformer 3, in 2020; an updated version of that model is the backbone of ChatGPT.

Duan said potential long-term compliance risks mean Chinese companies would most likely replace ChatGPT with a local alternative, if they could match the U.S.-developed product’s functionality.

“So we actually hope that there can be alternative solutions in China which we can directly use… it may handle Chinese even better, and it can also better comply with regulations,” he said.

© Thomson Reuters 2023



Exclusive: Bill Gates On Advising OpenAI, Microsoft And Why AI Is ‘The Hottest Topic Of 2023’

The Microsoft cofounder talked to Forbes about his work with AI unicorn OpenAI and back on Microsoft’s campus, AI’s potential impact on jobs and in medicine, and much more.

In 2020, Bill Gates left the board of directors of Microsoft, the tech giant he cofounded in 1975. But he still spends about 10% of his time at its Redmond, Washington headquarters, meeting with product teams, he says. A big topic of discussion for those sessions: artificial intelligence, and the ways AI can change how we work — and how we use Microsoft software products to do it.

In the summer of 2022, Gates met with OpenAI cofounder and president Greg Brockman to review some of the generative AI products coming out of the startup unicorn, which recently announced a “multiyear, multibillion” dollar deepened partnership with Microsoft.

You can read more about OpenAI and the race to bring AI to work — including comments from Brockman, CEO Sam Altman and many other players — in our print feature here. Gates’ thoughts on AI, shared exclusively with Forbes, are below.

This interview has been edited for clarity and consistency.

Alex Konrad: It looks like 2018 was the earliest I saw you talking with excitement about what OpenAI was doing. Is that right, or where does your interest in the company begin?

Bill Gates: [My] interest in AI goes back to my very earliest days of learning about software. The idea of computers seeing, hearing and writing is the long-term quest of the entire industry. It’s always been super interesting to me. And so as these machine learning techniques started to work extremely well, particularly for things like speech and image recognition, I’ve been fascinated by how many more inventions we would need before [AI] is really intelligent, in the sense of passing tests and being able to write fluently.

I know Sam Altman well. And I got to know Greg [Brockman] through OpenAI and some of the other people there, like Ilya [Sutskever, Brockman’s cofounder and chief scientist]. And I was saying to them, “Hey, you know, I think it doesn’t reach an upper bound unless we more explicitly have a knowledge representation, and explicit forms of symbolic logic.” There have been a lot of people raising those questions, not just me. But they were able to convince me that there was significant emergent behavior as you scaled up these large language models, and they did some really innovative stuff with reinforcement learning on top of it. I’ve stayed in touch with them, and they’ve been great about demoing their stuff. And now over time, they’re doing some collaboration, particularly with the huge back-ends that these skills require, that’s really come through their partnership with Microsoft.

That must be gratifying for you personally, that your legacy is helping their legacy.

Yeah, it’s great for me because I love these types of things. Also, wearing my foundation hat [The Bill & Melinda Gates Foundation, which Gates talked more about in September], the idea that a math tutor that’s available to inner city students, or medical advice that’s available to people in Africa who, during their lives, generally wouldn’t ever get to see a doctor, that’s pretty fantastic. You know, we don’t have white collar worker capacity available for lots of worthy causes. I have to say, really in the last year, the progress [in AI] has gotten me quite excited.

Few people have seen as many technological changes, or major shifts, as close-up as you have. How would you compare AI to some of these historic moments in technology history?

I’d say, this is right up there. We’ve got the PC without a graphics interface. Then you have the PC with a graphics interface, which are things like Windows and Mac, and which for me really began as I spent time with Charles Simonyi at Xerox PARC. That demo was greatly impactful to me and kind of set an agenda for a lot of what was done in both Microsoft and in the industry thereafter. [Editor’s note: a Silicon Valley research group famous for work on tech from the desktop to GPUs and the Ethernet.]

Then of course, the internet takes that to a whole new level. When I was CEO of Microsoft, I wrote the internet “tidal wave” memo. It’s pretty stunning that what I’m seeing in AI just in the last 12 months is every bit as important as the PC, the PC with GUI [graphical user interface], or the internet. It ranks among the four most important milestones in digital technology.

And I know OpenAI’s work better than others. I’m not saying they’re the only ones. In fact, you know, part of what’s amazing is that there’ll be a lot of entrants into this space. But what OpenAI has done is very, very impressive, and they certainly lead in many aspects of [AI], which people are seeing through the broad availability of ChatGPT.

How do you see this changing how people work or how they do business? Should they be excited about productivity? Should they be at all concerned about job loss? What should people know about what this will mean for how they work?

Most futurists who’ve looked at the coming of AI have said that repetitive blue collar and physical jobs would be the first jobs to be affected by AI. And that’s definitely happening, and people shouldn’t lower their guard to that, but it’s a little slower than I would have expected. You know, Rodney Brooks [a professor emeritus at MIT and robotics entrepreneur] put out what I would call some overly conservative views of how quickly some of those things would happen. Autonomous driving has particular challenges, but factory robotization will still happen in the next five to 10 years. But what’s surprising is that tasks that involve reading and writing fluency — like summarizing a complex set of documents or writing something in the style of a pre-existing author — the fact that you can do that with these large language models, and reinforce them, that fluency is really quite amazing.

One of the things I challenged Greg [Brockman] with early in the summer: “Hey, can [OpenAI’s model] pass the AP Biology tests?” And I said, “If you show me that, then I will say that it has the ability to represent things in a deeply abstract form, that’s more than just statistical things.” When I was first programming, we did these random sentence generators where we’d have the syntax of typical English sentences, you know, noun, verb, object. Then we’d have a set of nouns, a set of verbs and a set of objects and we would just randomly pick them, and every once in a while, it would spit out something that was funny or semi-cogent. You’d go, “Oh my god.” That’s the ‘monkeys typing on keyboards’ type of thing.
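The random sentence generator Gates describes can be sketched in a few lines of Python; the word lists here are invented purely for illustration:

```python
import random

# Illustrative word lists -- any sets of nouns, verbs and objects would do.
nouns = ["the dog", "a scientist", "the computer"]
verbs = ["eats", "writes", "debugs"]
objects = ["an apple", "a poem", "the program"]

def random_sentence():
    # Fill each slot of a fixed noun-verb-object syntax with a random pick.
    return f"{random.choice(nouns)} {random.choice(verbs)} {random.choice(objects)}."

print(random_sentence())
```

Most outputs are nonsense, but occasionally the random picks line up into something semi-cogent, which is exactly the effect Gates recalls.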

Well, this is a relative of that. Take [the AI’s] ability to take something like an AP test question. When a human reads a biology textbook, what’s left over in your mind? We can’t really describe that at a neurological level. But in the summer, [OpenAI] showed me progress that I really was surprised to see. I thought we’d have to invent more explicit knowledge representation.

Satya [Nadella, Microsoft’s CEO] is super nice about getting input from me on technological things. And I spend maybe 10% of my time meeting with Microsoft product groups about their product roadmaps. I enjoy that time, and it also helps me be super up-to-date for the work of the Foundation, which is in health, education and agriculture. And so it was a huge win to give feedback to OpenAI over the summer, too. (Now people are seeing most of what I saw; I’ve seen some things that are somewhat more up-to-date.) If you take this progression, the ability to help you write and to help you read is happening now, and it will just get better. And they’re not hitting a boundary, nor are their competitors.

So, okay, what does that mean in the legal world, or in the processing invoices world, or in the medical world? There’s been an immense amount of playing around with [ChatGPT] to try to drive those applications. Even things as fundamental as search.

[ChatGPT] is truly imperfect. Nobody suggests it doesn’t make mistakes, and it’s not very intuitive. And then, with something like math, it’ll just be completely wrong. Before it was trained, its self-confidence in a wrong answer was also mind blowing. We had to train it to do Sudoku, and it would get it wrong and say, “Oh, I mistyped.” Well, of course you mistyped, what does that mean? You don’t have a keyboard, you don’t have fingers! But you’re “mistyping?” Wow. But that’s what the corpus [of training text] had taught it.

Having spent time with Greg [Brockman] and Sam [Altman], what makes you confident that they are building this AI responsibly, and that people should trust them to be good stewards of this technology? Especially as we move closer to an AGI.

Well, OpenAI was founded with that in mind. They certainly aren’t a purely profit-driven organization, though they do want to have the resources to build big, big, big machines to take this stuff forward. And that will cost tens of billions of dollars, eventually, in hardware and training costs. But the near-term issue with AI is a productivity issue. It will make things more productive and that affects the job market. The long-term issue, which is not yet upon us, is what people worry about: the control issue. What if the humans who are controlling it take it in the wrong direction? If humans lose control, what does that mean? I believe those are valid debates.

These guys care about AI safety. They’d be the first to say that they haven’t solved it. Microsoft also brings a lot of sensibilities about these things as a partner as well. And look, AI is going to be debated. It’ll be the hottest topic of 2023, and that’s appropriate. It will change the job market somewhat. And it’ll make us really wonder, what are the boundaries? [For example] it’s not anywhere close to doing scientific invention. But given what we’re seeing, that’s within the realm of possibility five years from now or 10 years from now.

What is your favorite or most fun thing you’ve seen these tools create so far?

It’s so much fun to play around with these things. When you’re with a group of friends, and you want to write a poem about how much fun something has been. The fact that you can say okay, “write it like Shakespeare” and it does — that creativity has been fun to have. I’m always surprised that even though the reason I have access is for serious purposes, I often turn to [ChatGPT] just for fun things. And after I recite a poem it wrote, I have to admit that I could not have written that.




Paging Dr. AI? What ChatGPT and artificial intelligence could mean for the future of medicine | CNN


Without cracking a single textbook, without spending a day in medical school, the co-author of a preprint study correctly answered enough practice questions that it would have passed the real US Medical Licensing Examination.

But the test-taker wasn’t a member of Mensa or a medical savant; it was the artificial intelligence ChatGPT.

The tool, which was created to answer user questions in a conversational manner, has generated so much buzz that doctors and scientists are trying to determine what its limitations are – and what it could do for health and medicine.

ChatGPT, or Chat Generative Pre-trained Transformer, is a natural language-processing tool driven by artificial intelligence.

The technology, created by San Francisco-based OpenAI and launched in November, is not like a well-spoken search engine. It isn’t even connected to the internet. Rather, a human programmer feeds it a vast amount of online data that’s kept on a server.

It can answer questions even if it has never seen a particular sequence of words before, because ChatGPT’s algorithm is trained to predict what word will come next in a sentence based on the context of what comes before it. It draws on knowledge stored on its server to generate its response.
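The next-word prediction the article describes can be illustrated with a toy bigram model in Python. Real systems like ChatGPT use large neural networks trained on vastly more text, so this is only a sketch of the principle, with a made-up miniature corpus:

```python
from collections import Counter, defaultdict

# Tiny illustrative corpus; a real model trains on vast amounts of text.
corpus = "the cat sat on the mat the cat ate the fish".split()

# For each word, count which words were seen following it (a bigram model).
following = defaultdict(Counter)
for prev, nxt in zip(corpus, corpus[1:]):
    following[prev][nxt] += 1

def predict_next(word):
    # Predict the continuation seen most often in training.
    return following[word].most_common(1)[0][0]

print(predict_next("the"))  # -> "cat", the most frequent word after "the"
```

ChatGPT's models condition on far longer contexts than a single preceding word, but the core task — scoring likely continuations and emitting the best one — is the same.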

ChatGPT can also answer follow-up questions, admit mistakes and reject inappropriate questions, the company says. It’s free to try while its makers are testing it.

Artificial intelligence programs have been around for a while, but this one generated so much interest that medical practices, professional associations and medical journals have created task forces to see how it might be useful and to understand what limitations and ethical concerns it may bring.

Dr. Victor Tseng’s practice, Ansible Health, has set up a task force on the issue. The pulmonologist is a medical director of the California-based group and a co-author of the study in which ChatGPT demonstrated that it could probably pass the medical licensing exam.

Tseng said his colleagues started playing around with ChatGPT last year and were intrigued when it accurately diagnosed pretend patients in hypothetical scenarios.

“We were just so impressed and truly flabbergasted by the eloquence and sort of fluidity of its response that we decided that we should actually bring this into our formal evaluation process and start testing it against the benchmark for medical knowledge,” he said.

That benchmark was the three-part test that US med school graduates have to pass to be licensed to practice medicine. It’s generally considered one of the toughest of any profession because it doesn’t ask straightforward questions with answers that can easily be found on the internet.

The exam tests basic science and medical knowledge and case management, but it also assesses clinical reasoning, ethics, critical thinking and problem-solving skills.

The study team used 305 publicly available test questions from the June 2022 sample exam. None of the answers or related context was indexed on Google before January 1, 2022, so they would not be a part of the information on which ChatGPT trained. The study authors removed sample questions that had visuals and graphs, and they started a new chat session for each question they asked.

Students often spend hundreds of hours preparing, and medical schools typically give them time away from class just for that purpose. ChatGPT had to do none of that prep work.

The AI performed at or near passing for all the parts of the exam without any specialized training, showing “a high level of concordance and insight in its explanations,” the study says.

Tseng was impressed.

“There’s a lot of red herrings,” he said. “Googling or trying to even intuitively figure out with an open-book approach is very difficult. It might take hours to answer one question that way. But ChatGPT was able to give an accurate answer about 60% of the time with cogent explanations within five seconds.”

Dr. Alex Mechaber, vice president of the US Medical Licensing Examination at the National Board of Medical Examiners, said ChatGPT’s passing results didn’t surprise him.

“The input material is really largely representative of medical knowledge and the type of multiple-choice questions which AI is most likely to be successful with,” he said.

Mechaber said the board is also testing ChatGPT with the exam. The members are especially interested in the answers the technology got wrong, and they want to understand why.

“I think this technology is really exciting,” he said. “We were also pretty aware and vigilant about the risks that large language models bring in terms of the potential for misinformation, and also potentially having harmful stereotypes and bias.”

He believes that there is potential with the technology.

“I think it’s going to get better and better, and we are excited and want to figure out how do we embrace it and use it in the right ways,” he said.

Already, ChatGPT has entered the discussion around research and publishing.

The results of the medical licensing exam study were even written up with the help of ChatGPT. The technology was originally listed as a co-author of the draft, but Tseng says that when the study is published, ChatGPT will not be listed as an author because it would be a distraction.

Last month, the journal Nature created guidelines that said no such program could be credited as an author because “any attribution of authorship carries with it accountability for the work, and AI tools cannot take such responsibility.”

But an article published Thursday in the journal Radiology was written almost entirely by ChatGPT. It was asked whether it could replace a human medical writer, and the program listed many of its possible uses, including writing study reports, creating documents that patients will read and translating medical information into a variety of languages.

Still, it does have some limitations.

“I think it definitely is going to help, but everything in AI needs guardrails,” said Dr. Linda Moy, the editor of Radiology and a professor of radiology at the NYU Grossman School of Medicine.

She said ChatGPT’s article was pretty accurate, but it made up some references.

One of Moy’s other concerns is that the AI could fabricate data. It’s only as good as the information it’s fed, and with so much inaccurate information available online about things like Covid-19 vaccines, it could use that to generate inaccurate results.

Moy’s colleague Artie Shen, a graduating Ph.D. candidate at NYU’s Center for Data Science, is exploring ChatGPT’s potential as a kind of translator for other AI programs for medical imaging analysis. For years, scientists have studied AI programs from startups and larger operations, like Google, that can recognize complex patterns in imaging data. The hope is that these could provide quantitative assessments that could potentially uncover diseases, possibly more effectively than the human eye.

“AI can give you a very accurate diagnosis, but they will never tell you how they reach this diagnosis,” Shen said. He believes that ChatGPT could work with the other programs to capture its rationale and observations.

“If they can talk, it has the potential to enable those systems to convey their knowledge in the same way as an experienced radiologist,” he said.

Tseng said he ultimately thinks ChatGPT can enhance medical practice in much the same way online medical information has both empowered patients and forced doctors to become better communicators, because they now have to provide insight around what patients read online.

ChatGPT won’t replace doctors. Tseng’s group will continue to test it to learn why it creates certain errors and what other ethical parameters need to be put in place before using it for real. But Tseng thinks it could make the medical profession more accessible. For example, a doctor could ask ChatGPT to simplify complicated medical jargon into language that someone with a seventh-grade education could understand.

“AI is here. The doors are open,” Tseng said. “My fundamental hope is, it will actually make me and make us as physicians and providers better.”
