The AI wave: How Tamil cinema is embracing artificial intelligence tools

Senthil Nayagam has been besotted with actor Suriya for weeks. He has downloaded interviews and speeches of the actor, and has been feeding them to his many AI (artificial intelligence) tools in an attempt to “master Suriya’s voice”.

“Let’s try running it now,” says Senthil, rubbing his hands in glee as he quickly taps his laptop. He selects one of his favourite Ilaiyaraaja songs – ‘Nilave Vaa’, sung by SP Balasubrahmanyam, in the 1986 Tamil film Mouna Raagam – and presses some more keys.

Within a few seconds, the familiar strains of ‘Nilave Vaa’ play out, sung in Suriya’s distinct voice!

An AI-generated image of what could be the ultimate blockbuster: a film starring Rajinikanth and Kamal Haasan | Photo Credit: Special Arrangement

All this stemmed from a question Senthil asked himself a few months ago: Can I replace one person with another? Following that train of thought, he made the late SP Balasubrahmanyam sing ‘Rathamaarey’, originally sung by Vishal Mishra, from Rajinikanth’s recent hit Jailer.

Senthil then went one step further: he used a face-swapping technique to replace Tamannaah with Simran in the foot-tapping ‘Kaavala’, Anirudh’s hit song from Jailer. The short video generated more than 2 million views, especially after the two actresses shared it.

For Senthil, who currently runs a generative AI company called Muonium Inc, AI is “a toy he is experimenting with”. He is now using the technology to create a voice similar to AR Rahman’s that can sing all the songs the composer has written, like ‘Usurey Poguthey’ from Raavanan, which was performed by singer Karthik. “This is actual work,” he admits. “We need to separate the instruments and the voice, clean any noise and then mix it back. I’ve got mixed feedback for my content, because some fans aren’t happy with the videos featuring people who have passed on. But audiences should understand that the possibilities with AI are exciting.”
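The workflow he describes, separating stems, converting the vocal and remixing, can be sketched in a few lines of Python. The sketch below is an illustration only: it uses the open-source Spleeter separator and a placeholder `convert_voice` function standing in for whatever voice-conversion model is actually used, and it is not Senthil’s pipeline.

```python
# Illustrative sketch of the separate / convert / remix workflow described
# above, not Senthil's actual pipeline. Spleeter handles stem separation;
# convert_voice is a placeholder for a real voice-conversion model.
from pathlib import Path

import numpy as np
import soundfile as sf
from spleeter.separator import Separator


def convert_voice(vocals: np.ndarray, sample_rate: int, target: str) -> np.ndarray:
    """Placeholder for a voice-conversion model (e.g. an RVC-style system).

    A real implementation would re-render the vocal in the target singer's
    timbre; this stub simply returns the input unchanged.
    """
    return vocals


def swap_singer(song_path: str, out_path: str, target: str) -> None:
    # 1. Split the track into "vocals" and "accompaniment" stems.
    Separator("spleeter:2stems").separate_to_file(song_path, "stems")
    stem_dir = Path("stems") / Path(song_path).stem

    vocals, sr = sf.read(str(stem_dir / "vocals.wav"))
    backing, _ = sf.read(str(stem_dir / "accompaniment.wav"))

    # 2. Re-sing the vocal stem in the target voice.
    new_vocals = convert_voice(vocals, sr, target)

    # 3. Mix the converted vocal back with the accompaniment and normalise.
    n = min(len(new_vocals), len(backing))
    mix = new_vocals[:n] + backing[:n]
    sf.write(out_path, mix / np.max(np.abs(mix)), sr)
```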

Indeed, AI is slowly creeping into various facets of Tamil cinema, changing the way filmmakers envision and execute their projects. Like Senthil, there are many others dabbling in AI. Take Teejay-Sajanth Sritharan, for instance: a Sri Lankan Tamil living in the UK, he has also created voice models for many leading Tamil actors.

Using AI tools like Midjourney, Stable Diffusion and ChatGPT, or a combination of Python code and GPUs, these AI creators are not just putting their work out there for audiences, but also collaborating with producers and directors.

Lyricist and dialogue writer Madhan Karky used multiple AI tools for concept designs and world-building on the upcoming Suriya-starrer Kanguva. “I also use AI as my writing assistant when I write stories or scenes. It saves a lot of time, because we have tools like Kaiber and Gen-2 that can create animation and lyric videos,” he says.

Suriya in a still from ‘Kanguva’ | Photo Credit: Special Arrangement

Using a tool called SongR, Madhan created ‘En Mele’, the world’s first AI-composed Tamil song, now available on leading music streaming platforms. “I had to make it learn Tamil, which was very challenging,” he says, adding, “In a few years, most AI tools will become well-versed in all languages in the world.”

Karky has another prediction: that, within a year, a movie generated entirely using AI will be released in theatres.

While that may take a while, audiences will be able to watch about four minutes of AI-generated content this November in the Tamil film Weapon, starring Sathyaraj. The makers opted for this during post-production, when they felt that a flashback portion would add value to the film. Using software developed in-house, director Guhan Senniappan and his team fed photos of the leads (Sathyaraj and Vasanth Ravi) into it to generate the sequences.

“It saves a lot of time,” says Guhan, who has previously worked on Sawaari (2016) and Vella Raja (2018). “Earlier, we would need a few days to create one frame, but with AI, we can experiment with four or five frames in a single day and get instant output. When you are working with strict deadlines, this is a boon. But you need to input strong, accurate keywords for AI tools to generate the visuals you have in mind.”

Sathyaraj and director Guhan on the sets of Tamil film ‘Weapon’, which will feature an AI-generated portion | Photo Credit: Special Arrangement

Expect AI to alter every stage of filmmaking, including costumes. Mohamed Akram A of OrDrobe Apparels says that the Indian film industry will soon embrace AI to transform fashion in its projects and promotional activities. “Algorithms can be used to generate costume ideas that are not only visually stunning but also relevant to the storyline and character development,” he says. “Each character’s attire can be uniquely designed to reflect their personality, era, and storyline, enhancing the overall cinematic experience.”

OrDrobe, which is making its film-merchandising debut through an association with the makers of the upcoming Tamil film Nanban Oruvan Vandha Piragu, is keen to actively participate in the film space. “AI can also be used in fashion trend analysis and maintaining a digital wardrobe for characters, making it easier to recreate costumes for reshoots and to ensure character continuity throughout a film,” adds Mohamed.

The cost and time saved thanks to these processes might benefit producers in the long run.

While cash-rich producers can still opt for a big VFX team to do the job, that might not be a viable option for all, especially medium and small-scale film units. Using AI tools could help them achieve 80% of the desired output at one-third the cost spent on VFX, experts say.

But it does trigger a debate on ethics: English actor-comedian Stephen Fry recently lashed out at the makers of a historical documentary for faking his voice using AI.

Does all this mean that AI will someday replace human intelligence and labour, even in the movies? No, feels Karky. “Human creativity does have an upper edge. Our experiences and the emotions we undergo are what makes us different.”

“AI empowers creators, if used properly. But human creativity does have an upper edge. Our experiences and the emotions we undergo make us different.”
Madhan Karky, lyricist and dialogue writer

“Using AI tools, you can achieve 80 percent of the same output at one-third the cost that you would spend on VFX.”
Senthil Nayagam, AI creator


Forbes Global CEO Conference: Artificial Intelligence Evolution Brings Individual Empowerment, Tech Experts Say

Artificial intelligence experts speaking at the Forbes Global CEO Conference in Singapore on Tuesday expressed optimism about the future of AI, despite worries that the fast-growing technology could bring dramatic changes to business and society.

“I believe the current evolution of generative AI is a massive acceleration of a very long-term pattern of leveraging technology as a toolset,” Eduardo Saverin, cofounder and co-CEO of Singapore-based venture capital firm B Capital, said on a panel at the Forbes Global CEO Conference. “Where this potentially starts arriving into a phase change is this idea that through time, computers can effectively program themselves…we’re very early in that evolution, or that phase change, and it’s incredibly exciting.”

The Facebook (now Meta) cofounder, who topped this year’s edition of Singapore’s 50 Richest list with a net worth of $16 billion, added, “What’s empowering about [AI technology] is that it’s driving in some ways a realism to the idea that the world can be personalized down to the level of one.”

This includes tailored content, such as the idea of a hyper-personalized social media newsfeed—one of Meta’s “key evolutions” during the company’s early days, Saverin notes—that allows users to scroll through relevant content based on their interests.

The other panelists were Meng Ru Kuok, group CEO and founder of Caldecott Music Group; Antoine Blondeau, cofounder and managing partner of Alpha Intelligence Capital; and Rohan Narayana Murty, founder and CTO of Soroco.

In creative fields like the music industry, AI-driven developments are providing people the opportunity “to do things that they couldn’t do before, and do them at scale, and potentially autonomously,” said Kuok, who is also founder of music production app BandLab.

“Music has actually been using algorithms and AI and innovations and technology for a long time, whether it’s a transition from the recording studio, all the way to personal computing,” said Kuok. “Even as an operator, the speed and the unexpected nature of the technology shift has changed even all the old perspectives on the opportunity at hand.”

Still, developments are threatened by bad actors who may use AI tools to “create recursive, autonomous things” that introduce risks, added Kuok, citing concerns such as fraud related to music streaming. “I’m less worried about the computer, I’m worried more about the human,” he said. “That’s something for us to really think about, from safeguards…historically, it’s been humans who have been the problem as well as the solution.”

“Everything that is consumer-facing is going to be incredibly enhanced,” noted Blondeau, who worked on the project that became Apple’s voice assistant, Siri. These consumer-facing fields include healthcare and education, which he predicts AI will augment over the next few years. He raised the possibility of AI-powered drug discovery that could potentially identify variants of diseases before they emerge, or cures to debilitating conditions like cancer. “I always say that AI will save us before it kills us,” he said.

“AI will make us live longer, it will make us hyper-productive…this is the hope, and it’s a massive hope,” Blondeau added. “The fear is that we’ll end up in a video game, right? We’ll have nothing much to do, and the machines will have to do the hard work.”

To Murty, some of the concerns surrounding AI may involve its integration of systems that emulate the way humans think. “I don’t think [AI] is cognition, and I think there’s a lot of confusion around this,” he said. AI operates “as a black box” to simulate certain parts of human cognition, but not its entirety. “When we start thinking about cognition, that’s the last refuge, or bastion of human difference in this world, it gets quite scary,” Murty added.

Yet Murty sees AI as “the perfect tool” for identifying areas of improvement within companies, leveraging data instead of questionnaires. “For the first time, we have an opportunity to affect every single organization, in terms of how they get work done, in terms of how they think,” he said. “The very question of how office work ought to be done differently or better is in some sense best answered by a machine, not a person.”

AI’s potential to outperform humans reflects how any rapid innovation brings a “potential for human displacement,” said Saverin, but AI can create a “win-win scenario” for both small and large businesses. “We are ultimately humans, and we’re going to want to experience the world and digest the world in a human way,” he said.

“These [AI] technologies will make corporations efficient, profit centers more efficient…and there will be an infinite path of potential learning and enablement of what you can do as an individual, but how you earn money, and how you become an active participant in income generation in the world will evolve,” Saverin said. “We need to be very careful to enable that evolution to go in the right direction.”


AI could be the great equaliser the less well-off parts of Europe need

The promise of a better life for those who were historically on the fringes means that investment into AI should be further supported, and not stifled, Cristian Gherasim writes.

Barely a day goes by without hearing about yet another mind-blowing artificial intelligence advance. The beauty of AI is that we all have access to it, more so than with any other technological discovery from past epochs. 

Though rich countries are still very much ahead, they no longer hold a monopoly on the technology, and AI development is happening all across the globe, owing more to a nation’s capacity to innovate than to its overall wealth.

Eastern Europe is no exception, and despite the region remaining Europe’s most impoverished, research and development in AI seem to have picked up speed in various sectors. 

If harnessed wisely, AI could provide a boost towards growth for a region battling decades of communist-era shortages and post-communist economic inequality and deprivation.

World’s first AI-powered government adviser is Romanian

Though still behind the western world, some central and eastern European countries have made significant inroads in the AI sector.

For a few years now, Poland has been spearheading the fight against hate speech on the internet. 

In 2019, its Samurai Labs developed AI-based software that detects hate speech, violence and fake news across online media platforms. The tool proved particularly useful in the years following the Brexit vote, with UK police hiring the company to investigate anti-Polish content online.

Aside from fake news and hate speech, this AI-powered tool has also been used to combat online paedophilia and other crimes.

In Romania, the Humans.ai start-up delivered the world’s first AI-based government adviser, ION, to help the Romanian prime minister understand the needs of constituents. 

The project, developed in collaboration with AI researchers and professors from Romania, aims to get a better sense of public opinion and how the public reacts to certain events, key issues and policies.

The company is branching out, partnering with research centres in the Middle East such as in the Emirati city of Ras Al-Khaimah (RAK), where it aims to reshape the tech landscape and create the first free zone and hub in the world dedicated exclusively to AI innovation and development. 

Furthermore, Humans.ai will provide blockchain technology for the region’s AI ecosystem and startups.

Moldova’s pest management and Ukraine’s hi-tech warfare

Romania’s neighbour Moldova is also putting AI to good use, training it to detect pests and implement weed management for local crops. 

The program — developed by the local company DRON Assistance and financed by the United Nations — is being tested on a 73-hectare field in the village of Onitcani.

In Ukraine, artificial intelligence is already at the forefront of the country’s defence strategy against Russian aggression. 

AI helps identify Russian soldiers, track troop movement, establish new targets and intercept enemy communications, together with helping fend off Russian disinformation.

Drones and robots have already revolutionised not only the war in Ukraine but warfare in general. Ukraine is indeed a testing ground, a living lab for AI warfare. 

This is also leading to the development of a strong civilian tech sector, in which Ukrainian start-ups are growing through partnerships.

What does Eastern Europe stand to gain?

According to research by Goldman Sachs, AI could bring a near $7 trillion (€6.47tn) increase in annual global GDP over a ten-year period.

The potential for economic growth is limitless, and Eastern Europe can tap into it.

Some sectors are already witnessing AI-powered changes. Aside from its military use that we see at play in Ukraine, the technology can have a crucial role in shaping the region in years to come.

Agricultural drones flown by AI software spray on average up to 40% less active substance, allowing for more accurate spraying and safer crops.

These AI-powered drones present a far more ecological option for farmers, who no longer need tractors and avoid burning the fossil fuels that pollute crops.

AI could also help the region develop better waste management, with smart recycling bins and facilities helping to sort and collect rubbish more efficiently.

Healthcare is another thorny problem: the region is notorious for its lack of doctors, and Romania is ranked as having the worst healthcare system in Europe.

Aside from poor financing and systemic corruption, hospitals in the region are facing a severe shortage of physicians. 

AI can help supplement a dwindling number of medics so that more people can access medical supervision, with studies showing that the technology is capable of performing some tasks as well as or better than humans.

AI could turn out to be the ‘great equaliser’

AI can undoubtedly be a force for good in Eastern Europe as much as anywhere else. But when so much is happening so fast, the conversation tends to become too broad, and at times so abstract, that those trying to make sense of it end up pushed to the fringes, either loving or loathing the technology.

As with every new tool, however, cautious optimism should drive the approach as well as a better understanding of what that new tool can do for you, your home country and your region. 

The European Union also has a role to play, both by fostering the development of AI to make Europe a leading competitor alongside tech juggernauts like the US and China, and by keeping an eye on potential risks through thorough checks and balances.

At the same time, the promise of a better life for those who were historically on the fringes means that investment into AI should be further supported, and not stifled.

If we do this carefully and with our joint progress in mind, we could see the harnessing of AI turn out to be the great equaliser the less well-off parts of Europe need — and something our entire continent would benefit from. 

Cristian Gherasim is an analyst, consultant and journalist with over 15 years of experience focusing on Eastern and Central European affairs.

At Euronews, we believe all views matter. Contact us at [email protected] to send pitches or submissions and be part of the conversation.


Having AI present the news might be exactly what journalism needs

As opposed to the infamous “deep fake” phenomenon, AI avatar news — known as Deep Real — can represent a commitment to truth, transparency, and the pursuit of journalistic integrity, Miri Michaeli writes.

In the ever-evolving world of journalism, a seismic shift is underway—one that challenges conventions, disrupts traditional practices, and, if used correctly, could herald the dawn of a new era. 

The adoption of artificial intelligence stands at the forefront of this transformative wave. Its impact on the future of journalism is undeniable, potentially threatening to erode the very essence of journalism, while at the same time revolutionising the way journalists connect with audiences and deliver news with enhanced clarity and global reach.

As we witness the emergence of the world’s first fully AI-automated news edition with digital avatar presenters, concerns about the implications of this technology must be addressed in order to unlock its true potential.

As opposed to the infamous “deep fake” phenomenon, AI avatar news — known as Deep Real — can represent a commitment to truth, transparency, and the pursuit of journalistic integrity. 

While deep fakes epitomise deception and trickery, Deep Real can be the antithesis — a manifestation of genuine journalism leveraging the power of technology.

Building trust through transparency

The rise of AI-generated avatars and the use of actors’ or journalists’ images without consent or compensation raise legitimate fears about the exploitation of their identities. 

The recent Black Mirror episode “Joan Is Awful,” which saw Salma Hayek embarrassing herself in a church, serves as an explosive reminder of the dystopian consequences that could ensue if these concerns are left unaddressed.

One of the primary criticisms levelled against the tech is that it will lead to the destruction of trust in journalism. 

Detractors argue that the use of AI-generated avatars and automated scripts will create a sense of artificiality and detachment from reality. 

However, if implemented with journalistic and intellectual rigour, it has the potential to enhance transparency and build trust in unprecedented ways. 

AI news is a chance to break with traditional boundaries

By leveraging AI technology, we can provide audiences with a deeper understanding of the news-making process, showcasing the data sources and algorithms used to generate content.

Rather than weakening the role of human input, AI avatar reporters represent a groundbreaking leap forward — a fusion of human innovation and integrity that holds immense potential for journalism. 

The technology allows news to break traditional boundaries, delivering reporting with enhanced clarity and a global reach that was previously unimaginable.

At the same time, rather than diminishing the role of human journalists, digital clones empower them to delve deeper into their craft. 

By automating certain aspects of news production, Deep Real can liberate journalists, allowing them to focus on investigations and analysis, and cultivating meaningful connections with their sources. 

It is a tool that enhances the storytelling abilities of journalists, amplifying their voices and unleashing their creativity.

What about ethics and integrity in AI journalism?

In the realm of digital avatar journalists, ethical considerations take on paramount importance. 

To effectively harness the potential of AI-driven journalism while upholding principles of transparency and accountability, robust guidelines and industry-wide standards must be established. 

Transparency should be a guiding principle, ensuring audiences are aware of the use of AI-generated avatars and distinguishing them from human journalists.

News organisations must also maintain accountability, taking responsibility for the content produced by digital avatars and adhering to rigorous fact-checking and quality control measures. 

Preserving journalistic integrity necessitates a commitment to high ethical standards, exercising critical judgment, and recognizing the limitations of AI technology as a tool that complements, but does not replace, human journalists.

The responsible use of AI technology in journalism demands ongoing critical engagement. While AI presents exciting opportunities to push the boundaries of journalism, its adoption should be approached thoughtfully. 

Regular assessments of its impact on society, democracy, and the profession are essential to maintain a healthy balance between the benefits of AI and the core principles of journalism. 

By embracing AI with a strong ethical framework, we can shape a future where technology and journalism converge harmoniously, enriching storytelling, expanding reach, and enhancing democratic discourse.

A new era for journalism

In the face of these advancements, the future of journalism is not dark but brighter than ever. AI presents an unparalleled opportunity to democratize news, personalize storytelling, and amplify the impact of journalists. 

It is a tool that empowers journalists to connect with global audiences, transcend boundaries, and navigate the complexities of our world with greater efficiency and precision.

As we embrace this new technology, let us fully seize the immense potential it offers. Let us champion the timeless values of truth, accuracy, and transparency in this new era of journalism. 

Deep Real represents the remarkable convergence of human ingenuity and technological innovation — a powerful force that propels us toward a future where storytelling knows no bounds.

By integrating Deep Real into the fabric of journalism, we can usher in an era of heightened connectivity, inclusivity, and impact. 

It is a call to journalists, news organisations, and society as a whole to harness this transformative tool responsibly, with an unwavering commitment to the principles that have guided journalism throughout history.

Miri Michaeli, a veteran Israeli journalist, is the co-founder and chief news anchor of ACT News, a pioneer in the field of AI-powered news broadcasts using digital avatar presenters.

At Euronews, we believe all views matter. Contact us at [email protected] to send pitches or submissions and be part of the conversation.


ChatGPT Fever Spreads to US Workplace as Firms Raise Concerns Over Leaks

Many workers across the US are turning to ChatGPT to help with basic tasks, a Reuters/Ipsos poll found, despite fears that have led employers such as Microsoft and Google to curb its use. Companies worldwide are considering how to best make use of ChatGPT, a chatbot program that uses generative AI to hold conversations with users and answer myriad prompts. Security firms and companies have raised concerns, however, that it could result in intellectual property and strategy leaks.

Anecdotal examples of people using ChatGPT to help with their day-to-day work include drafting emails, summarising documents, and doing preliminary research.

Some 28 percent of respondents to the online poll on artificial intelligence (AI) between July 11 and 17 said they regularly use ChatGPT at work, while only 22 percent said their employers explicitly allowed such external tools.

The Reuters/Ipsos poll of 2,625 adults across the United States had a credibility interval, a measure of precision, of about 2 percentage points.
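That stated precision is consistent with a textbook margin-of-error calculation for a sample of this size; the back-of-the-envelope check below is my own, not Ipsos’s published methodology.

```python
# Rough 95% margin of error for a proportion near 50% with n = 2,625
# respondents; a textbook approximation, not Ipsos's exact method.
n = 2625
margin = 1.96 * (0.5 * 0.5 / n) ** 0.5
print(f"{margin * 100:.1f} percentage points")  # about 1.9, i.e. "about 2"
```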

Some 10 percent of those polled said their bosses explicitly banned external AI tools, while about 25 percent did not know if their company permitted the use of the technology.

ChatGPT became the fastest-growing app in history after its launch in November. It has created both excitement and alarm, bringing its developer OpenAI into conflict with regulators, particularly in Europe, where the company’s mass data-collecting has drawn criticism from privacy watchdogs.

Human reviewers from other companies may read any of the generated chats, and researchers have found that similar artificial intelligence (AI) models can reproduce data absorbed during training, creating a potential risk for proprietary information.

“People do not understand how the data is used when they use generative AI services,” said Ben King, VP of customer trust at corporate security firm Okta.

“For businesses, this is critical, because users don’t have a contract with many AIs – because they are a free service – so corporates won’t have to run the risk through their usual assessment process,” King said.

OpenAI declined to comment when asked about the implications of individual employees using ChatGPT but highlighted a recent company blog post assuring corporate partners that their data would not be used to train the chatbot further unless they gave explicit permission.

When people use Google’s Bard it collects data such as text, location, and other usage information. The company allows users to delete past activity from their accounts and request that content fed into the AI be removed. Alphabet-owned Google declined to comment when asked for further detail.

Microsoft did not immediately respond to a request for comment.

‘HARMLESS TASKS’

A US-based employee of Tinder said workers at the dating app used ChatGPT for “harmless tasks” like writing emails even though the company does not officially allow it.

“It’s regular emails. Very non-consequential, like making funny calendar invites for team events, farewell emails when someone is leaving … We also use it for general research,” said the employee, who declined to be named because they were not authorized to speak with reporters.

The employee said Tinder has a “no ChatGPT rule” but that employees still use it in a “generic way that doesn’t reveal anything about us being at Tinder”.

Reuters was not able to independently confirm how employees at Tinder were using ChatGPT. Tinder said it provided “regular guidance to employees on best security and data practices”.

In May, Samsung Electronics banned staff globally from using ChatGPT and similar AI tools after discovering an employee had uploaded sensitive code to the platform.

“We are reviewing measures to create a secure environment for generative AI usage that enhances employees’ productivity and efficiency,” Samsung said in a statement on August 3.

“However, until these measures are ready, we are temporarily restricting the use of generative AI through company devices.”

Reuters reported in June that Alphabet had cautioned employees about how they use chatbots including Google’s Bard, at the same time as it markets the program globally.

Google said although Bard can make undesired code suggestions, it helps programmers. It also said it aimed to be transparent about the limitations of its technology.

BLANKET BANS

Some companies told Reuters they are embracing ChatGPT and similar platforms while keeping security in mind.

“We’ve started testing and learning about how AI can enhance operational effectiveness,” said a Coca-Cola spokesperson in Atlanta, Georgia, adding that data stays within its firewall.

“Internally, we recently launched our enterprise version of Coca-Cola ChatGPT for productivity,” the spokesperson said, adding that Coca-Cola plans to use AI to improve the effectiveness and productivity of its teams.

Tate & Lyle Chief Financial Officer Dawn Allen, meanwhile, told Reuters that the global ingredients maker was trialing ChatGPT, having “found a way to use it in a safe way”.

“We’ve got different teams deciding how they want to use it through a series of experiments. Should we use it in investor relations? Should we use it in knowledge management? How can we use it to carry out tasks more efficiently?”

Some employees say they cannot access the platform on their company computers at all.

“It’s completely banned on the office network like it doesn’t work,” said a Procter & Gamble employee, who wished to remain anonymous because they were not authorized to speak to the press.

P&G declined to comment. Reuters was not able to independently confirm whether employees at P&G were unable to use ChatGPT.

Paul Lewis, chief information security officer at cyber security firm Nominet, said firms were right to be wary.

“Everybody gets the benefit of that increased capability, but the information isn’t completely secure and it can be engineered out,” he said, citing “malicious prompts” that can be used to get AI chatbots to disclose information.

“A blanket ban isn’t warranted yet, but we need to tread carefully,” Lewis said. 

© Thomson Reuters 2023  



AI recap this month: Drone ‘kills’ operator; DeepMind’s speed-up

A US Air Force Reaper drone (Image: APFootage/Alamy)

Reports of AI drone “killing” its operator amounted to nothing

This month we heard about a fascinating AI experiment from a US Air Force colonel. An AI-controlled drone trained to autonomously carry out bombing missions had turned on its human operator when told not to attack targets; its programming prioritised successfully carrying out missions, so it saw human intervention as an obstacle in its way and decided to forcefully take it out.

The only problem with the story was that it was nonsense. Firstly, as the colonel told it, the test was a simulation. Secondly, a US Air Force statement was hastily issued to clarify that the colonel, speaking at a UK conference, had “mis-spoke” and that no such tests had been carried out.

New Scientist asked why people are so quick to believe AI horror stories, with one expert saying it was partly down to our innate attraction to “horror stories that we like to whisper around the campfire”.

The problem with this kind of misconstrued story is that it is so compelling. The “news” was published around the world before any facts could be checked, and few of those publications had any interest in later setting the record straight. AI presents a genuine danger to society in many ways and we need informed debate to explore and prevent them, not sensationalism.

AI can optimise computer code (Image: DeepMind)

DeepMind AI speeds up algorithm that could have global impact on computer power

AI has brought surprise after surprise in recent years, showing itself capable of spitting out an essay on any given topic, creating photorealistic images from scratch and even writing functional source code. So you would be forgiven for not getting too excited about news of a DeepMind AI slightly improving a sorting algorithm.

But dig deeper and the work is interesting and has solid real-world applications. Sorting algorithms are run trillions of times around the world and are so commonly used in all kinds of software that they are written into libraries that coders can call on as and when needed to avoid having to reinvent the wheel. These filed-away algorithms had been refined and tweaked by humans for so long that they were considered complete and as efficient as possible.

This month, DeepMind’s AI found an improvement that can speed up sorting by as much as 70 per cent, in the right scenario. Any improvement that can be rolled out to every computer, smartphone or anything with a computer chip can bring huge savings in energy use and computation time. How many more commonly used algorithms can AI find efficiency gains in? Time will tell.

Wind power could be turbocharged by AI (Image: Mimadeo/Shutterstock)

AI could boost output of all wind turbines around the world

While DeepMind is searching for efficiency gains in source code, others are using AI to find them in machines. Wind turbines work best when directly facing oncoming wind, but the breeze obstinately keeps changing direction. Currently turbines use a variety of techniques to maintain efficiency, but it seems that AI may be able to do a slightly better job.

Researchers trained an AI on real-world data about wind direction and found that it could come up with a strategy that raised efficiency by keeping the turbine facing the right way more of the time. This involved more rotating, which used more energy, but even taking that into account they were able to squeeze 0.3 per cent more power from the turbines.

This figure may not make for a great headline, but it’s enough to boost electricity production by 5 terawatt-hours a year – about the same amount as is consumed annually by Albania, or 1.7 million average UK homes – if rolled out to every turbine around the world.
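As a rough sanity check of that claim (my own arithmetic, using an assumed round figure of about 1,800 TWh for annual global wind generation, which is not stated in the article):

```python
# 0.3% of roughly 1,800 TWh of annual global wind generation (assumed figure).
global_wind_twh = 1_800
extra = 0.003 * global_wind_twh
print(f"about {extra:.1f} TWh per year")  # ~5 TWh, matching the article
```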

A surprising way to defeat ChatGPT (Image: Ievgen Chabanov/Alamy)

Capital letter test is a foolproof way of sorting AIs from humans

The Turing test is a famous way of assessing the intelligence of a machine: can a human conversing through a text interface tell whether they are speaking to another human or an AI? Well, large language models like ChatGPT are now pretty adept at holding realistic conversations so we perhaps need a new test.

In recent years we have seen a suite of 204 tests proposed as a kind of new Turing Test, covering subjects such as mathematics, linguistics and chess. But a much simpler method has just been published in a paper: superfluous upper-case letters and words are added to otherwise sensible statements in an attempt to trip up AI.

Give a human a phrase such as “isCURIOSITY waterARCANE wetTURBULENT orILLUSION drySAUNA?” and they are likely to notice that the lower case letters alone form a logical sentence. But an AI reading the same input would be flummoxed, researchers showed. Five large language models, including OpenAI’s GPT-3 and ChatGPT, and Meta’s LLaMA, failed the test.
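The strategy a human reader applies is trivial to express in code, which is what makes the models’ failure striking. A toy illustration (mine, not from the paper):

```python
# Recover the hidden sentence by dropping the inserted upper-case letters,
# which is roughly what a human reader does at a glance.
phrase = "isCURIOSITY waterARCANE wetTURBULENT orILLUSION drySAUNA?"
decoded = "".join(ch for ch in phrase if not ch.isupper())
print(decoded)  # "is water wet or dry?"
```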

But other experts point out that now the test exists, AI can be trained to understand it, and so will pass in the future. Distinguishing AI from humans could become a cat-and-mouse game with no end.

Could the European Union set the future course for AI? (Image: iStockphoto)

What is the future of AI? Google and the EU have very different ideas

Regulators and tech companies don’t seem to be pulling in the same direction on AI. While some industry players have called for a halt to research until the dangers are better understood, most legislators are pushing safeguarding rules to ensure it can progress safely – and lots of tech firms are ploughing ahead at full speed to commercially release AI.

Politicians in the EU have agreed an updated version of its AI Act, which has been years in the making – the president of the European Commission, Ursula von der Leyen, promised to urgently bring in AI legislation when she was elected in 2019. The laws will now require companies to disclose any copyright content that was used to train generative AI such as ChatGPT.

On the other hand, companies like Google and Microsoft are ploughing on with rolling out AI to many of their products, worried about being left behind in a revolution that could rival the birth of the internet.

While technology has always outpaced legislation, leaving society struggling to ensure harms are minimised, AI really is moving at a surprising pace. And the results of its commercial roll-out could be catastrophic: Google has found that its output can be unreliable even when cherry-picked for advertising. The potential benefits of AI are undisputed, but the trick will be to make sure they outweigh the harms.


How should we regulate generative AI, and what will happen if we fail?

By Rohit Kapoor, Vice Chairman and CEO, EXL

Regulation must encourage collaboration and research between all the major players, from experts in the field to policy-makers and ethicists, Rohit Kapoor writes.

Generative AI is experiencing rapid growth and expansion. 

There’s no question as to whether this technology will change the world — all that remains to be seen is how long it will take for the transformative impact to be realised and how exactly it will manifest in each industry and niche. 

Whether it’s fully automated and targeted consumer marketing, medical reports generated and summarised for doctors, or chatbots with distinct personality types being tested by Instagram, generative AI is driving a revolution in just about every sector.

The potential benefits of these advancements are monumental. Quantifying the hype, a recent report by Bloomberg Intelligence predicted an explosion in generative AI market growth, from $40 billion (€36.5bn) in 2022 to $1.3 trillion (€1.18tn) in the next ten years.

But in all the excitement to come, it’s absolutely critical that policy-makers and corporations alike do not lose sight of the risks of this technology.

These large language models, or LLMs, present dangers which not only threaten the very usefulness of the information they produce but could also prove threatening in entirely unintentional ways — from bias to blurring the lines between real and artificial to loss of control.

Who’s responsible?

The responsibility for taking the reins on regulation falls naturally with governments and regulatory bodies, but it should also extend beyond them. The business community must self-govern and contribute to principles that can become regulations while policy-makers deliberate.

Two core principles should be followed as soon as possible by those developing and running generative AI, in order to foster responsible use and mitigate negative impacts. 

First, large language models should only be applied to closed data sets to ensure safety and confidentiality. 

Second, all development and adoption of use cases leveraging generative AI should have the mandatory oversight of professionals to ensure “humans in the loop”.

These principles are essential for maintaining accountability, transparency, and fairness in the use of generative AI technologies.

From there, three main areas will need attention from a regulatory perspective.

Maintaining our grip on what’s real

The capabilities of generative AI to mimic reality are already quite astounding, and it’s improving all the time. 

So far this year, the internet has been awash with startling images like the Pope in a puffer jacket or the Mona Lisa as she would look in real life. 

And chatbots are being deployed in unexpected realms like dating apps — where the introduction of the technology is reportedly intended to reduce “small talk”.

The wider public should feel no guilt in enjoying these creative outputs, but industry players and policy-makers must be alive to the dangers of this mimicry.

Amongst them are identity theft and reputational damage. 

Distinguishing between AI-generated content and content genuinely created by humans is a significant challenge, and regulation should consider the consequences and surveillance aspects of it.

Clear guidelines are needed to determine the responsibility of platforms and content creators to label AI-generated content. 

Robust verification systems like watermarking or digital signatures would support this authentication process.
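As a sketch of what a signature-based verification layer could look like, the snippet below signs a piece of AI-generated text with an Ed25519 key so that anyone holding the publisher’s public key can check where it came from. This is a generic illustration using Python’s cryptography library, not a description of any deployed labelling standard.

```python
# Minimal provenance sketch: a publisher signs AI-generated content so that
# platforms holding its public key can verify the label. Illustration only.
from cryptography.hazmat.primitives.asymmetric import ed25519

publisher_key = ed25519.Ed25519PrivateKey.generate()
public_key = publisher_key.public_key()

content = b"This article was generated with the assistance of an AI model."
signature = publisher_key.sign(content)

# verify() raises InvalidSignature if the content or signature was altered.
public_key.verify(signature, content)
print("signature checks out")
```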

Tackling imperfections that lead to bias

Policy-makers must set about regulating the monitoring and validation of imperfections in the data, algorithms and processes used in generative AI. 

Bias is a major factor. Training data can be biased or inadequate, resulting in a bias in the AI itself. 

For example, this might cause a company chatbot to deprioritise customer complaints that come from customers of a certain demographic or a search engine to throw up biased answers to queries. And biases in algorithms can perpetuate those unfair outcomes and discrimination.

Regulations need to force the issue of transparency and push for clear documentation of processes. This would help ensure that processes can be explained and that accountability is upheld. 

At the same time, it would enable scrutiny of generative AI systems, including safeguarding of intellectual property (IP) and data privacy — which, in a world where data is the new currency, is crucially important.

On top of this, regulating the documentation involved would help prevent “hallucinations” by AI — which are essentially where an AI gives a response that is not justified by the data used to train it.

Preventing the tech from becoming autonomous and uncontrollable

An area for special caution is the potential for an iterative process of AI creating subsequent generations of AI, eventually leading to AI that is misdirected or compounding errors. 

The progression from first-generation to second- and third-generation AI is expected to occur rapidly. 

The fundamental requirement of the self-declaration of AI models, where each model openly acknowledges its AI nature, is of utmost importance. 

However, enabling and regulating this self-declaration poses a significant practical challenge. One approach could involve mandating hardware and software companies to implement hardcoded restrictions, allowing only a certain threshold of AI functionality. 

Advanced functionality above such a threshold could be subject to an inspection of systems, audits, testing for compliance with safety standards, restrictions on degrees of deployment and levels of security, etc. Regulators should define and enforce these restrictions to mitigate risks.

We should be acting quickly and together

The world-changing potential of generative AI demands a coordinated response. 

If each country and jurisdiction develops its own rules, the adoption of the technology — which has the potential for enormous good in business, medicine, science and more — could be crippled. 

Regulation must encourage collaboration and research between all the major players, from experts in the field to policy-makers and ethicists. 

With a coordinated approach, the risks can be sensibly mitigated, and the full benefits of generative AI realised, unlocking its huge potential.

Rohit Kapoor is the Vice Chairman and CEO of EXL, a data analytics and digital operations and solutions company.

At Euronews, we believe all views matter. Contact us at [email protected] to send pitches or submissions and be part of the conversation.


AI Is Helping Us Read Minds, but Should We?

Mind reading has so far existed only in the realms of fantasy and fiction, but it seems fair to apply the phrase to a system that uses brain scan data to decipher stories that a person has read, heard, or even just imagined. It’s the latest in a series of spooky linguistic feats fueled by artificial intelligence, and it’s left people wondering what kinds of nefarious uses humanity will find for such advances.

Even the lead researcher on the project, computational neuroscientist Alexander Huth, called his team’s sudden success with using noninvasive functional magnetic resonance imaging to decode thoughts “kind of terrifying” in the pages of Science.

But what’s also terrifying is the fact that any of us could come to suffer the horrific condition the technology was developed to address — paralysis so profound that it robs people of the ability even to speak. That can happen gradually through neurological diseases such as ALS or suddenly, as with a stroke that rips away all ability to communicate in an instant. Take, for example, the woman who described an ordeal of being fully aware for years while treated as a vegetable. Or the man who recounted being frozen, terrified and helpless as a doctor asked his wife if they should withdraw life support and let him die.

Magazine editor Jean-Dominique Bauby, who suffered a permanent version of the condition, used a system of eye blinks to write the book The Diving Bell and the Butterfly. What more could he have done given a mind decoder?

Each mind is unique, so the system developed by Huth and his team only works after being trained for hours on a single person. You can’t aim it at someone new and learn anything, at least for now, Huth and collaborator Jerry Tang explained last week at a press event ahead of the publication of their work in Monday’s Nature Neuroscience.

And yet their advance opens prospects that are both scary and enticing: A better understanding of the workings of our brains, a new window into mental illness, and maybe a way for us to know our own minds. Balanced against that is the concern that one day such technology may not require an individual’s consent, allowing it to invade the last refuge of human privacy.

Huth, who is an assistant professor at the University of Texas, was one of the first test subjects. He and two volunteers had to remain motionless for a total of 16 hours each in a functional MRI, which tracks brain activity through the flow of oxygenated blood, listening to stories from The Moth Radio Hour and the Modern Love podcast, chosen because they tend to be enjoyable and engaging.

This trained the system, which produced a model for predicting patterns of brain activity associated with different sequences of words. Then there was a trial-and-error period, during which the model was used to reconstruct new stories from the subjects’ brain scans, harnessing the power of a version of ChatGPT to predict which word would likely follow from another.
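Based on that description, the decoding stage can be pictured as a propose-and-score loop: the language model suggests plausible next words, and the candidate whose predicted brain response best matches the recorded scan is kept. The sketch below is a simplified paraphrase of that idea using hypothetical helper functions, not the team’s actual code.

```python
# Simplified propose-and-score decoding loop, paraphrasing the description
# above. lm_candidates and encoding_model are hypothetical stand-ins for the
# language model and the per-subject brain-response model.
import numpy as np


def decode_story(scans, lm_candidates, encoding_model, n_words=50):
    text = ""
    for step in range(n_words):
        best_word, best_score = None, float("-inf")
        # Ask the language model for plausible continuations of the text so far.
        for word in lm_candidates(text):
            candidate = f"{text} {word}".strip()
            # Predict the brain activity this candidate should evoke and compare
            # it with what was actually recorded at this point in the story.
            predicted = encoding_model(candidate)
            score = float(np.dot(predicted, scans[step]))
            if score > best_score:
                best_word, best_score = word, score
        text = f"{text} {best_word}".strip()
    return text
```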

Eventually the system was able to “read” brain scan data to decipher the gist of what the volunteers had been hearing. When the subjects heard, “I don’t have my driver’s license yet,” the system came up with, “she has not even started to learn to drive.” For some reason, Huth explained, it’s bad with pronouns, unable to figure out who did what to whom.

Weirder still, the subjects were shown videos with no sound, and the system could make inferences about what they were seeing. In one, a character kicked down another, and the system used the brain scan to come up with, “he knocked me to the ground.” The pronouns seemed scrambled, but the action was spookily on target.

The people in the scanner might never have been thinking in words at all. “We’re definitely getting at something deeper than language,” Tang said. “There’s a lot more information in brain data than we initially thought.”

This isn’t a rogue lab doing mad science but part of a long-term effort that’s been pursued by scientists around the world. In a 2021 New Yorker article, researchers described projects leading up to this breakthrough. One shared a vision of a Silicon Valley-funded endeavor that could streamline the cumbersome functional MRI scanner into a wearable “thinking hat.” People would wear the hat, along with sensors recording their surroundings, to decode their inner worlds and mind meld with others — perhaps even communicate with other species. The recent breakthroughs make this future seem closer.

For something that’s never existed, mind reading seems to crop up regularly in popular culture, often reflecting a desire for lost or never-realized connection, as Gordon Lightfoot sang about in If You Could Read My Mind. We envy the Vulcans their capacity for mind melding.

Historical precedent, however, warns that people can do harm by simply taking advantage of the belief that they have a mind-reading technology — just as authorities have manipulated juries, crime suspects, job candidates and others with the belief that a polygraph is an accurate lie detector. Scientific reviews have shown that the polygraph does not work as people think it does. But then, scientific studies have shown our brains don’t work the way we think they do either.

So, the important work of giving voice back to people whose voices have been lost to illness or injury must be undertaken with deep thought for ethical considerations; and an awareness of the many ways in which that work can be subverted. Already there’s a whole field of neuroethics, and experts have evaluated the use of earlier, less effective versions of this technology. But this breakthrough alone warrants a new focus. Should doctors or family members be allowed to use systems such as Huth’s to attempt to ask about a paralyzed patient’s desire to live or die? What if it reports back that the person chose death? What if it misunderstood? These are questions we all should start grappling with.

© 2023 Bloomberg LP



Exclusive: Confluent Cofounder Neha Narkhede’s New Fraud-Detecting Firm Oscilar Emerges From Stealth

One of America’s most successful women entrepreneurs is at it again, striking out this time in partnership with her husband and giving herself the CEO title for the first time. Neha Narkhede, a software engineer who cofounded data-streaming software firm Confluent in 2014 and served as its chief technology and product officer for more than five years, has a new company coming out of stealth Thursday, she shared exclusively with Forbes.

Narkhede, 38, and her husband Sachin Kulkarni, 39, a former engineering executive at Meta Platforms, founded Oscilar in 2021 to decrease the risks involved with online transactions—mainly the risk of loan defaults and fraud with things like financial transactions and insurance—using artificial intelligence. They are funding it themselves with $20 million, have hired about two dozen employees and say they have signed up dozens of customers, many of which are fintech firms with an average of 500 employees. Narkhede is the chief executive officer and Kulkarni is chief technology officer of the remote-work-first firm, which has a small office in Palo Alto, California.

Narkhede says the goal with Oscilar is “making the internet safer.” To do that, she and Kulkarni spoke to 100 risk experts at fintech and other companies and learned that existing risk models rely on incomplete, outdated information about user behavior and do not always use the most updated machine learning techniques. They believe they can protect online transactions from fraud and theft more quickly and accurately with less engineering support than others by combining well-sourced data with AI. The problem is real and growing. U.S. consumers reported losing $8.8 billion to fraud in 2022, an increase of nearly $2.6 billion over 2021, according to the Federal Trade Commission.

“The key benefit is that we remove the need for a company to use engineers for risk assessments since no coding is required,” Narkhede says. “Businesses decide ahead of time what data they want to be analyzing, and we set up the programming to ensure our AI technology brings in the data to advise on the risk of every transaction, leaving it to the risk analysts so they run tests and approve tweaks to the model.”

Narkhede is a born-and-bred engineer. After earning her bachelor’s in computer science from Savitribai Phule Pune University in India, she immigrated to the United States for a master’s degree in computer science from Georgia Tech in Atlanta, from which she graduated in 2007. She then had stints as a software engineer at Oracle and LinkedIn.

At LinkedIn, she and two colleagues—Jay Kreps and Jun Rao—co-created open source messaging system Apache Kafka to handle the professional networking site’s huge amount of incoming data. In 2014, the Apache Kafka founders left LinkedIn to found Confluent, which helps organizations process large amounts of data using Apache Kafka. By early 2019, Confluent had raised more than $200 million from venture capital firms; that helped Narkhede land on Forbes’ annual list of America’s Richest Self-Made Women in 2019 (and stay on since then). Confluent went public in June 2021, spiked to a $24.75 billion market capitalization and made Narkhede a billionaire for several months, before the stock fell 77%. She sold close to $170 million worth of Confluent stock (before taxes) prior to and after Confluent’s 2021 IPO. Forbes estimates that she’s currently worth about $475 million.

In January 2020, she stepped down as Confluent CTO but kept her board seat. Back then, the idea for Oscilar had already taken root. “When I was involved in Confluent day-to-day, I saw companies that use Apache Kafka struggle with building their fraud and [credit] risk decisioning systems,” Narkhede says. “That’s when the seed was planted in my brain.”

The problem Oscilar is meant to address, Narkhede describes, is that existing risk detection systems have a hard time pulling together disparate data sources, can be slow to adapt to new input and can be hard to customize. Oscilar’s product—a constantly-training, no-code model that users can, alongside Oscilar’s team, customize and fine-tune—attempts to address the gaps Narkhede sees in existing systems.

Notably, the company received offers of venture capital cash—“all inbound interest,” Narkhede says—but she turned them down in favor of bootstrapping. “Self-funding has provided us with the autonomy to move fast,” Narkhede says, adding she will likely be open to outside funding in the far-out future. She says the initial funding should provide Oscilar with “several years of runway.”

Narkhede and Kulkarni’s 9-to-5 these days is 9 p.m. to 5 a.m.—their sleeping hours. To manage a team with employees spanning North America and Europe on top of their two-year-old son, the couple tucks in for the night at 9 p.m. and begins their work days at 5 a.m., Kulkarni says. They’ve done so ever since they started Oscilar—around the same time their son was born.

To run Oscilar, Narkhede and Kulkarni maintain a clear separation of responsibilities, advice they received after consulting other cofounder couples, according to Kulkarni. Even though both have highly technical backgrounds, he is responsible for the engineering side of the company, while Narkhede spends more time on operations and clients.

Oscilar has plenty of competition from companies like DataVisor, Provenir, Sift and Alloy, as well as bigger outfits like Google that have the bandwidth to build their own risk models.

Like most risk-detecting models, Oscilar uses a combination of customer biographical information, customer transaction history and third-party data from credit bureaus and other sources. Oscilar also touts a “semi-supervised” machine learning algorithm, meaning it combines labeled data (which includes an “outcome,” such as a credit risk score) with unlabeled data (which does not, and is therefore easier to process) in a single model. That approach, too, is not necessarily unique.
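
For readers unfamiliar with the term, here is a minimal, self-contained sketch of the general semi-supervised idea, using scikit-learn’s self-training classifier on synthetic data. It illustrates the technique only; the features, labels and model choice are assumptions, not Oscilar’s system:

```python
# Minimal sketch of semi-supervised learning: labeled rows carry an outcome
# (1 = fraud, 0 = legitimate), unlabeled rows are marked -1, and the model
# bootstraps itself on both. All data here is synthetic and illustrative.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.semi_supervised import SelfTrainingClassifier

rng = np.random.default_rng(0)
X = rng.normal(size=(1_000, 5))                # stand-in transaction features
y_true = (X[:, 0] + X[:, 1] > 1).astype(int)   # hidden "fraud" outcome

y = y_true.copy()
y[200:] = -1                                   # only the first 200 rows carry a label

model = SelfTrainingClassifier(LogisticRegression(max_iter=1_000))
model.fit(X, y)                                # learns from labeled and unlabeled rows together

pred = model.predict(X[200:])
print("accuracy on the unlabeled rows:", (pred == y_true[200:]).mean())
```

The appeal for fraud detection is that confirmed outcomes (chargebacks, verified fraud reports) are scarce and slow to arrive, while unlabeled transactions are plentiful, so a model that can learn from both has more signal to work with.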

But Oscilar’s team, which customer Henry Shi, chief operating officer at $1 billion (sales) fintech firm Super, describes as super responsive and easy to work with, is a large part of what makes the company and its product stand out. Oscilar’s 25-person team includes members with data science or engineering experience, often specifically in building AI risk models, at places like Google, Uber, Meta and Confluent.

The “secret sauce,” as Narkhede puts it, is the team’s technical ability to gather high-quality data, create a comprehensive understanding of a user’s behavior, and build AI models that update quickly and automatically. Narkhede says she uses the terms AI and ML (machine learning) interchangeably.

As Oscilar looks toward its first weeks out of stealth, the company is actively hiring and aims to further expand its customer base to fintech companies of different sizes and across different sectors, letting its product, rather than its CEO’s big reputation, lead the way.

Super COO Shi, for example, who met Narkhede at a fintech conference in fall 2022 and became a customer soon afterward, was first drawn in by Oscilar’s product because it was low-code, customizable and dynamic, he says. “I didn’t even realize she was the cofounder of Confluent until the very end of our conversation.”

This article was updated on March 30 to clarify that Uber was a customer, not Lyft.


EU’s AI Act vote looms. We’re still not sure how free AI should be


The opinions expressed in this article are those of the author and do not represent in any way the editorial position of Euronews.

The European Union’s long-awaited law on artificial intelligence (AI) is expected to be put to a vote at the European Parliament at the end of this month. 

But Europe’s efforts to regulate AI could be nipped in the bud as lawmakers struggle to agree on critical questions regarding AI definition, scope, and prohibited practices. 

Meanwhile, Microsoft’s decision this week to scrap its entire AI ethics team despite investing $11 billion (€10.3bn) into OpenAI raises questions about whether tech companies are genuinely committed to creating responsible safeguards for their AI products.

At the heart of the dispute around the EU’s AI Act is the need to protect fundamental rights, such as data privacy and democratic participation, without restricting innovation. 

How close are we to algocracy?

The advent of sophisticated AI platforms, including the launch of ChatGPT in November last year, has sparked a worldwide debate on AI systems. 

It has also forced governments, corporations and ordinary citizens to address some uncomfortable existential and philosophical questions. 

How close are we to becoming an algocracy — a society ruled by algorithms? What rights will we be forced to forego? And how do we shield society from a future in which these technologies are used to cause harm? 

The sooner we can answer these and other similar questions, the better prepared we will be to reap the benefits of these disruptive technologies — but also steel ourselves against the dangers that accompany them.

The promise of technological innovation has taken a major leap forward with the arrival of new generative AI platforms, such as ChatGPT and DALL-E 2, which can create words, art and music with a set of simple instructions and provide human-like responses to complex questions.

These tools could be harnessed as a power for good, but the recent news that ChatGPT passed a US medical-licensing exam and a Wharton Business School MBA exam is a reminder of the looming operational and ethical challenges. 

Academic institutions, policy-makers and society at large are still scrambling to catch up.

ChatGPT passed the Turing Test — and it’s still in its adolescence

Developed in the 1950s, the so-called Turing Test has long been the line in the sand for AI. 

The test was used to determine whether a computer is capable of thinking like a human being. 

Mathematician and code-breaker Alan Turing was convinced that one day a human would be unable to distinguish between answers given by a real person and a machine. 

He was right — that day has come. In recent years, disruptive technologies have advanced beyond all recognition. 

AI technologies and advanced machine-learning chatbots are still in their adolescence; they need more time to bloom. 

But they give us a valuable glimpse of the future, even if these glimpses are sometimes a bit blurred. 

The optimists among us are quick to point to the enormous potential for good presented by these technologies: from improving medical research and developing new drugs and vaccines to revolutionising the fields of education, defence, law enforcement, logistics, manufacturing, and more. 

However, international organisations such as the EU Fundamental Rights Agency and the UN High Commissioner for Human Rights have been right to warn that these systems often do not work as intended. 

A case in point is the Dutch government’s SyRI system, which used an algorithm to spot suspected benefits fraud and was found to be in breach of the European Convention on Human Rights.

How to regulate without slowing down innovation?

At a time when AI is fundamentally changing society, we lack a comprehensive understanding of what it means to be human. 

Looking to the future, there is also no consensus on how we will — and should — experience reality in the age of advanced artificial intelligence. 

We need to get to grips with the implications of sophisticated AI tools that have no concept of right or wrong, tools that malign actors can easily misuse. 

So how do we go about governing the use of AI so that it is aligned with human values? I believe that part of the answer lies in creating clear-cut regulations for AI developers, deployers and users. 

All parties need to be on the same page when it comes to the requirements and limits on the use of AI, and companies such as OpenAI and DeepMind have a responsibility to introduce their products to the public in a controlled and responsible way. 

Even Mira Murati, the Chief Technology Officer at OpenAI, the company behind ChatGPT, has called for more regulation of AI. 

If managed correctly, direct dialogue between policy-makers, regulators and AI companies will provide ethical safeguards without slowing innovation.

One thing is for sure: the future of AI should not be left in the hands of programmers and software engineers alone. 

In our search for answers, we need an alliance of experts from all fields

The philosopher, neuroscientist and AI ethics expert Professor Nayef Al-Rodhan makes a convincing case for a pioneering type of transdisciplinary inquiry — Neuro-Techno-Philosophy (NTP). 

NTP makes a case for creating an alliance of neuroscientists, philosophers, social scientists, AI experts and others to help understand how disruptive technologies will impact society and the global system. 

We would be wise to take note. 

Al-Rodhan, and other academics who connect the dots between (neuro)science, technology and philosophy, will be increasingly useful in helping humanity navigate the ethical and existential challenges posed by these game-changing innovations, as well as their potential impact on frontier risks and humanity’s future.

In the not-too-distant future, we will see robots carry out tasks that go far beyond processing data and responding to instructions: a new generation of autonomous humanoids with unprecedented levels of sentience. 

Before this happens, we need to ensure that ethical and legal frameworks are in place to protect us from the dark sides of AI. 

Civilisational crossroads beckons

At present, we overestimate our capacity for control, and we often underestimate the risks. This is a dangerous approach, especially in an era of digital dependency. 

We find ourselves at a unique moment in time, a civilisational crossroads, where we still have the agency to shape society and our collective future. 

We have a small window of opportunity to future-proof emerging technologies, making sure that they are ultimately used in the service of humanity. 

Let’s not waste this opportunity.

Oliver Rolofs is a German security expert and the Co-Founder of the Munich Cyber Security Conference (MCSC). He was previously Head of Communications at the Munich Security Conference, where he established the Cybersecurity and Energy Security Programme.

At Euronews, we believe all views matter. Contact us at [email protected] to send pitches or submissions and be part of the conversation.
