EU’s AI Act vote looms. We’re still not sure how free AI should be


The opinions expressed in this article are those of the author and do not represent in any way the editorial position of Euronews.

The European Union’s long-awaited law on artificial intelligence (AI) is expected to be put to a vote at the European Parliament at the end of this month.

But Europe’s efforts to regulate AI could be nipped in the bud as lawmakers struggle to agree on critical questions: how to define AI, what the law should cover, and which practices to prohibit.

Meanwhile, Microsoft’s decision this week to scrap its entire AI ethics team, despite investing $11 billion (€10.3bn) in OpenAI, raises questions about whether tech companies are genuinely committed to creating responsible safeguards for their AI products.

At the heart of the dispute around the EU’s AI Act is the need to protect fundamental rights, such as data privacy and democratic participation, without restricting innovation.

How close are we to algocracy?

The advent of sophisticated AI platforms, including the launch of ChatGPT in November last year, has sparked a worldwide debate on AI systems.

It has also forced governments, corporations and ordinary citizens to address some uncomfortable existential and philosophical questions.

How close are we to becoming an algocracy — a society ruled by algorithms? What rights will we be forced to forgo? And how do we shield society from a future in which these technologies are used to cause harm?

The sooner we can answer these and other similar questions, the better prepared we will be to reap the benefits of these disruptive technologies — but also steel ourselves against the dangers that accompany them.

The promise of technological innovation has taken a major leap forward with the arrival of new generative AI platforms, such as ChatGPT and DALL-E 2, which can create words, art and music with a set of simple instructions and provide human-like responses to complex questions.

These tools could be harnessed as a power for good, but the recent news that ChatGPT passed a US medical-licensing exam and a Wharton Business School MBA exam is a reminder of the looming operational and ethical challenges.

Academic institutions, policy-makers and society at large are still scrambling to catch up.

ChatGPT passed the Turing Test — and it’s still in its adolescence

Developed in the 1950s, the so-called Turing Test has long been the line in the sand for AI.

The test is meant to determine whether a computer is capable of thinking like a human being.

Mathematician and code-breaker Alan Turing was convinced that one day a human would be unable to distinguish between answers given by a real person and a machine.

He was right — that day has come. In recent years, disruptive technologies have advanced beyond all recognition.

AI technologies and advanced machine-learning chatbots are still in their adolescence; they need more time to bloom.

But they give us a valuable glimpse of the future, even if these glimpses are sometimes a bit blurred.

The optimists among us are quick to point to the enormous potential for good presented by these technologies: from improving medical research and developing new drugs and vaccines to revolutionising the fields of education, defence, law enforcement, logistics, manufacturing, and more.

However, international organisations such as the EU Fundamental Rights Agency and the UN High Commissioner for Human Rights have been right to warn that these systems often do not work as intended.

A case in point is the Dutch tax authority’s SyRI system, which used an algorithm to spot suspected benefits fraud and was found to breach the European Convention on Human Rights.

How to regulate without slowing down innovation?

At a time when AI is fundamentally changing society, we lack a comprehensive understanding of what it means to be human.

Looking to the future, there is also no consensus on how we will — and should — experience reality in the age of advanced artificial intelligence.

We need to get to grips with the implications of sophisticated AI tools that have no concept of right or wrong, tools that malign actors can easily misuse.

So how do we go about governing the use of AI so that it is aligned with human values? I believe that part of the answer lies in creating clear-cut regulations for AI developers, deployers and users.

All parties need to be on the same page when it comes to the requirements and limits for the use of AI, and companies such as OpenAI and DeepMind have a responsibility to introduce their products to the public in a controlled, responsible way.

Even Mira Murati, the Chief Technology Officer at OpenAI, the company behind ChatGPT, has called for more regulation of AI.

If managed correctly, direct dialogue between policy-makers, regulators and AI companies will provide ethical safeguards without slowing innovation.

One thing is for sure: the future of AI should not be left in the hands of programmers and software engineers alone.

In our search for answers, we need an alliance of experts from all fields

The philosopher, neuroscientist and AI ethics expert Professor Nayef Al-Rodhan makes a convincing case for a pioneering type of transdisciplinary inquiry — Neuro-Techno-Philosophy (NTP).

NTP makes a case for creating an alliance of neuroscientists, philosophers, social scientists, AI experts and others to help understand how disruptive technologies will impact society and the global system.

We would be wise to take note.

Al-Rodhan and other academics who connect the dots between (neuro)science, technology and philosophy will be increasingly valuable in helping humanity navigate the ethical and existential challenges posed by these game-changing innovations, the frontier risks they carry, and their potential impact on humanity’s futures.

In the not-too-distant future, we will see robots carry out tasks that go far beyond processing data and responding to instructions: a new generation of autonomous humanoids with unprecedented levels of sentience.

Before this happens, we need to ensure that ethical and legal frameworks are in place to protect us from the dark sides of AI.

Civilisational crossroads beckons

At present, we overestimate our capacity for control, and we often underestimate the risks. This is a dangerous approach, especially in an era of digital dependency.

We find ourselves at a unique moment in time, a civilisational crossroads, where we still have the agency to shape society and our collective future.

We have a small window of opportunity to future-proof emerging technologies, making sure that they are ultimately used in the service of humanity.

Let’s not waste this opportunity.

Oliver Rolofs is a German security expert and the Co-Founder of the Munich Cyber Security Conference (MCSC). He was previously Head of Communications at the Munich Security Conference, where he established the Cybersecurity and Energy Security Programme.

At Euronews, we believe all views matter. Contact us at [email protected] to send pitches or submissions and be part of the conversation.


Are programmes like ChatGPT bringing useful change or unknown chaos?


The opinions expressed in this article are those of the author and do not represent in any way the editorial position of Euronews.

Since ChatGPT exploded onto the scene in November 2022, many have contemplated how it might transform life, jobs and education as we know them, for better or worse.

Many, including us, are excited by the benefits that digital technology can bring to consumers.

However, our experience in testing digital products and services, shaping digital policy, and diving into consumers’ perspectives on IoT, AI, data and platforms means our eyes are also wide open to the challenges of disruptive digitalisation.

After all, consumers should be able to use the best technology in the way they want to and not have to compromise on safety and trust.

What’s in it for consumers?

There’s plenty of positive potential in new, generative technologies like ChatGPT, including producing written content, creating training materials for medical students or writing and debugging code.

We’ve already seen people innovate consumer tasks with ChatGPT — for example, using it to write a successful parking fine appeal.

And when asked, ChatGPT had its own ideas of what it could do for consumers.

“I can help compare prices and specifications of different products, answer questions about product maintenance and warranties, and provide information on return and exchange policies…. I can also help consumers understand technical terms and product specifications, making it easier for them to make informed decisions,” it told us when we asked the question.

Looking at this, you might wonder whether this level of service from a machine will make experts in all fields, including ours, obsolete.

However, the rollout of ChatGPT and similar technologies has shown that it still has a problem with accuracy, which is, in turn, a problem for its users.

The search for truth

Let’s start by looking at the challenge of accuracy and truth in a large language model like ChatGPT.

ChatGPT has started to disrupt internet search through a rollout of the technology in Microsoft’s Bing search engine.

With ChatGPT-enabled search, results appear not as a list of links but as a neat summary of the information within the links, presented in a conversational style.

The answers can be finessed through more questions, just as if you were chatting to a friend or advisor.

This could be really helpful for a request like “can you show me the most lightweight tent that would fit into a 20-litre bike pannier”.

Results like these would be easy to verify, and perhaps more crucially, if they turn out to be wrong, they would not pose a major risk to a person.

However, it’s a different story when the information that is “wrong” or “inaccurate” carries a material risk of harm — for example, health or financial advice or deliberate misinformation that could cause wider social problems.

It’s convincing, but is it reliable?

The problem is that technologies like ChatGPT are very good at writing convincing answers.

But OpenAI have been clear that ChatGPT has not been designed to write text that is true.

It is trained to predict the next word and create answers that sound highly plausible — which means that a misleading or untrue answer could look just as convincing as a reliable, true one.
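To make that mechanism concrete, here is a minimal, hypothetical sketch of next-word prediction. The probability table and the words in it are invented purely for illustration; a real model like ChatGPT learns its probabilities from vast amounts of text and conditions on much longer contexts.

```python
import random

# Toy next-word probabilities -- invented for illustration only.
# A real large language model learns distributions like these from
# huge text corpora and conditions on far longer contexts.
NEXT_WORD = {
    "the moon is": {"bright": 0.5, "made": 0.3, "full": 0.2},
    "moon is made": {"of": 0.9, "from": 0.1},
    "is made of": {"rock": 0.6, "cheese": 0.4},  # plausible is not the same as true
}

def generate(prompt: str, steps: int) -> str:
    """Repeatedly sample the next word given the last three words."""
    words = prompt.split()
    for _ in range(steps):
        context = " ".join(words[-3:])
        dist = NEXT_WORD.get(context)
        if dist is None:  # no known continuation: stop
            break
        choices, weights = zip(*dist.items())
        words.append(random.choices(choices, weights=weights)[0])
    return " ".join(words)

print(generate("the moon is", 3))
# Possible output: "the moon is made of cheese" -- fluent and confident, but false.
```

Nothing in this loop checks the output against reality; the only yardstick is how probable the next word looks, which is exactly why a falsehood can read as smoothly as a fact.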

The speedy delivery of convincing, plausible untruths through tools like ChatGPT becomes a critical problem in the hands of users whose sole purpose is to mislead, deceive and defraud.

Large language models like ChatGPT can be trained to learn different tones and styles, which makes them ripe for exploitation.

Convincing phishing emails suddenly become much easier to compose, and persuasive but misleading visuals are quicker to create.

Scams and fraud could become ever more sophisticated, and disinformation ever harder to distinguish from the truth. Both could slip past the defences we have built up.

We need to learn how to get the best out of ChatGPT

Even focusing on just one aspect of ChatGPT, those of us working to protect consumers in Europe and worldwide have had to examine the many layers of consequences this advanced technology could create once it reaches users’ hands.

People in our field are indeed already working together with businesses, digital rights groups and research centres to start to unpick the complexities of such a disruptive technology.

OpenAI have put safeguards around the use of the technology, but others rolling out similar products may not.

Strong, future-focused governance and rules are needed to make sure that consumers can make the most of the technology with confidence.

As the AI Act develops, Euroconsumers’ organisations are working closely with BEUC to secure consumer rights to privacy, safety and fairness in the legislative frameworks.

In the future, we will be ready to defend consumers in court over harm caused by AI systems.

True innovation still has human interests at its core

However, there are plenty of reasons to look at the tools of the future, like ChatGPT, with optimism.

We believe that innovation can be a lever of social and economic development by shaping markets that work better for consumers.

However, true innovation needs everyone’s input and only happens when tangible benefits are felt in the lives of as many people as possible.

But we are only at the beginning of what is turning out to be an intriguing experience with these interactive, generative technologies.

It may be too early for a definitive last word, but one thing is absolutely sure: despite — and perhaps even because of — ChatGPT, there will still be plenty of need for consumer protection by actual humans.

Marco Pierani is the Director of Public Affairs and Media Relations, and Els Bruggeman serves as Head of Advocacy and Enforcement at Euroconsumers, a group of five consumer organisations in Belgium, Italy, Brazil, Spain and Portugal.

At Euronews, we believe all views matter. Contact us at [email protected] to send pitches or submissions and be part of the conversation.


I asked ChatGPT to help me plan a vacation. Here’s what happened next

Some people love travel planning.

But I am not one of those people.

So the idea that artificial intelligence chatbots, such as ChatGPT and Bing, can research travel destinations and create itineraries is intriguing.

But I’m skeptical too.

Do recommendations just scratch the surface — for example, suggesting that I see the Eiffel Tower in Paris? Or can they recommend lesser-known restaurants and handle specific hotel requests too?

The answer is: yes and no — at least for ChatGPT.

Unfortunately, I couldn’t test Bing. When I tried to access it, I was put on a waiting list. The website said I could “get ahead in the line” if I set Microsoft defaults on my computer and scanned a QR code to install the Bing app. I did both. I’m still waiting.

ChatGPT was easier. I went to the developer’s website, clicked on the word “ChatGPT,” registered for an account — and started chatting.

Some of ChatGPT’s lodging recommendations came with inaccurate details, though. Sandat Glamping Tents, it said, had a 4.9/5 rating on Google (actual: 4.5/5) and “over 400 excellent reviews” on Tripadvisor (actual: 277 reviews).

But perhaps the biggest blunder: Free Spirit Spheres is actually in Canada, which the bot acknowledged when pressed.

ChatGPT is nothing if not apologetic.

Cost estimates for each hotel were more accurate. But ChatGPT couldn’t show photographs of the hotels or help book them — although it did provide ample instructions on how to do both.

By road or by rail?

Flights

ChatGPT can name airlines that connect cities, but it can’t give current flight information or help book flights.

It wasn’t able to tell me the cheapest fare — or any fare — from London to New York this spring because it doesn’t “have access to real-time pricing information,” it said.

In fact, ChatGPT’s training data ends in September 2021; it doesn’t “know” anything that’s happened since.

However, the bot could answer when the London-to-New York route is usually cheapest: “January and February, or during the shoulder season months of March and November.”

As for the best airline in the world, it said: “As an AI language model, I cannot have personal preferences or opinions.” But it went on to list the top five airlines from Skytrax’s “World’s Top 100 Airlines” ranking for 2021.

The list wasn’t correct.

The list provided by ChatGPT appears to be Skytrax’s airline ranking from 2019 instead.

ChatGPT says it’s at Proud Mary — a coffee shop that tops many “best of” lists today.

Midway through my planning, though, ChatGPT locked me out with an “error code 1020” message.

This error may be caused by overloaded servers or by exceeding the daily limit, according to the tech website Stealth Optional. Either way, all of my previous chats were inaccessible, a huge negative for travelers in the middle of the planning process.

A new window didn’t fix the problem, but opening one in “incognito mode” did. Once in, I clicked on “Upgrade to Plus,” which showed that the free plan is available when demand is low, but for $20 per month, the “Plus plan” gives access to ChatGPT at all times, faster responses and priority access to new features.

With access again, I quickly asked about wait times on Disney World rides, a subject I had discussed with luxury travel advisor Jonathan Alder of Jonathan’s Travels last week. Alder lives close to the park and has lost count of how many times he’s visited, he said. Yet ChatGPT’s answers and his overlapped on only one ride: Epcot’s “Frozen Ever After.”

ChatGPT mentioned that FastPass and Genie+ can reduce wait times at Disney World, which is only partly right: Disney phased out its “skip the line” virtual queue FastPass program when it introduced Genie+ in the fall of 2021.

The takeaway