The EU should make facial recognition history for the right reasons

The opinions expressed in this article are those of the author and do not represent in any way the editorial position of Euronews.

Whilst civil rights activists have long called for an outright ban, certain EU lawmakers may see the AI Act as an opportunity to claim that they are doing the (human) right(s) thing while actually doing the opposite, Ella Jakubowska writes.

In June 2023, the European Parliament made history when it voted in favour of a total ban on live facial recognition in public spaces. 

Through the new artificial intelligence bill, the EU could stop companies and authorities from treating us all as walking barcodes. But pressure from EU governments threatens to transform this possibility into the stuff of George Orwell’s nightmares.

Since late summer 2023, EU governments and parliamentarians have been striving to reach an agreement on the Artificial Intelligence (AI) Act. This landmark law has promised that the EU will become a world leader in balancing AI innovation with protection — but the reality is less optimistic.

Throughout November, EU lawmakers are debating how much leeway to give police to use public facial recognition.

Despite these systems having been tied to human rights violations around the world, and recently condemned by Amnesty International for facilitating Israel’s system of oppression against Palestinians, the European Parliament’s commitment to a strong ban is at risk.

The commodification and abuse of our most sensitive data

In the last decade, information about every facet of our physical being — the faces, fingerprints, and eyes of practically every person worldwide — has become common currency.

This information — known as biometric data — is a mathematical representation of the most minute and intimate details and characteristics that make up who we are. From the vibrancy of human difference, we are collectively boiled down to a string of 1s and 0s.

Biometric data have been used in recent years to surveil and monitor people, from trying to suppress and scare pro-democracy protesters in Hong Kong and Russia to persecuting Black communities in the US. 

Even seemingly mundane uses, like in national identity documents, have in fact turned out to be great enablers for systems which scan people’s faces and bodies without due cause — a move that amounts to biometric mass surveillance.

Whilst the EU presents itself as a beacon of democracy and human rights, it has of course not been immune to practices which amount to biometric mass surveillance.

Transport hubs in Germany and Belgium, protesters in Austria, people going about their day in the Czech Republic, people sleeping rough in Italy and many, many more have all been subjected to public facial recognition surveillance. 

Most recently, France made its aspirations for biometric mass surveillance clear, passing a law to roll out automated surveillance systems for use at the upcoming Olympic and Paralympic games.

A wolf in sheep’s clothing

Human rights advocates have long argued that using people’s faces and bodies to identify and track them at scale already runs contrary to EU human rights and data protection law. 

Whether in live mode or used retrospectively, notoriously unreliable and discriminatory public facial recognition infringes massively on our human rights and essential dignity.

But the use of such systems — often under vague claims of “public safety” — is widespread, and legal protections against them are patchy and applied inconsistently.

The AI surveillance industry has all but told lawmakers: “For national security reasons, we cannot disclose evidence that these systems work, but we can assure you that they do”. What’s worse is that lawmakers seem to be taking their word for it.

Whilst civil rights activists have long called for an outright ban, certain EU lawmakers may see the AI Act as an opportunity to claim that they are doing the (human) right(s) thing while actually doing the opposite.

According to media reports from Brussels, despite previous commitments to outlaw biometric mass surveillance practices, the European Parliament is now considering “narrow” exceptions to a ban on the live recognition of faces and other human features.

Nothing but a full ban in the AI Act will do

But the crux of this issue is that you cannot allow just a little bit of biometric mass surveillance without opening the floodgates.

This is not hypothetical: using the purportedly “narrow” exceptions written into the draft AI Act by the EU’s executive arm, the government of Serbia has twice tried to legalise the roll-out of thousands of Huawei facial recognition-equipped cameras. 

If the EU AI Act permits exceptions that would allow EU countries to make use of untargeted public facial recognition, it will not be long until we are fighting off biometric mass surveillance laws in all twenty-seven EU countries.

One of the grounds for use that is reportedly being considered as an exception to the ban is the search for people suspected or convicted of serious crimes. But there is simply no way to do this without scanning the features of everyone in a public space, which research has proven has a severe chilling effect on democracies.

There are also major questions about necessity. If there really is an urgent situation, is a risky and unreliable technology like facial recognition actually going to help?

Whilst the press in Brussels notes that the European Parliament would want safeguards to be added, there is little that these safeguards can do to stop people having to look over their shoulder everywhere they go. You cannot safeguard the violation of a fundamental human right.

Freedom or mass surveillance?

The EU is on the precipice of a huge achievement — an AI Act which truly puts people at its centre. 

But if done poorly, we will instead find ourselves on the precipice of a law which tells the whole world that the EU prioritises the surveillance industry over people and communities.

Authorities will be able to use the exceptions to public facial recognition to justify the near-permanent use of these systems. 

And the mass surveillance infrastructure — vast networks of public cameras and sensors propped up by powerful processors — will be ready and waiting for us at the press of a button. 

There’s no need to smile for the camera any more — you’ll be captured whether you like it or not.

Ella Jakubowska is a Senior Policy Advisor at European Digital Rights (EDRi), a network collective of non-profit organisations, experts, advocates and academics working to defend and advance digital rights across the continent.


To protect our rights, the AI Act must include rule of law safeguards

By Eva Simon, Advocacy Lead for Tech & Rights, and Jonathan Day, Communications Manager, Civil Liberties Union For Europe

The opinions expressed in this article are those of the author and do not represent in any way the editorial position of Euronews.

AI is a part of everyday life in countless ways, and how we choose to regulate it will shape our societies. EU lawmakers must use this opportunity to craft a law that harnesses the opportunities without undermining the protection of our rights or the rule of law, Eva Simon and Jonathan Day write.

The EU’s Artificial Intelligence Act — the world’s first comprehensive legal framework for AI — is in the final stages of negotiations before becoming law. 

Now, as the last details are being agreed, European lawmakers must seize the opportunity to safeguard human rights and firmly regulate the use of artificial intelligence.

Crucially, however, the debate around the AI Act has given insufficient attention to a key feature: the act must establish a clearly defined link between artificial intelligence and the rule of law. 

While the inclusion of human rights safeguards in the act has been discussed extensively, establishing a link to the rule of law is equally important. 

Democracy, human rights, and the rule of law are distinct but interdependent concepts: none can be separated from the others without inflicting damage on society.

An opportunity to strengthen the rule of law in Europe

The principle of the rule of law is fundamental to the EU. It is a precondition for the realisation of other fundamental values and for the enjoyment of human rights.

Notoriously hard to define, the rule of law nevertheless encompasses a set of values that are indispensable to a democratic society: a transparent and pluralistic lawmaking process; the separation of powers and checks and balances; independent, impartial courts and the ability to access them; and non-discrimination and equality before the law.

Given AI’s increasing integration into both the private and public sectors, we need robust safeguards to protect the very foundation our Union stands on, because the misuse of AI systems poses a significant threat to the rule of law and democracy.

In member states where these are teetering, regulatory loopholes could be exploited to weaken democratic institutions and processes and the rule of law. 

The AI Act is an opportunity to create a robust, secure regulatory environment founded upon fundamental rights and rule-of-law-based standards and safeguards.

Proper oversight for AI used in justice systems

Central to these safeguards is the inclusion of mandatory fundamental rights impact assessments. 

They are included in the European Parliament’s version of the AI Act, and it is imperative that they make it into the final text of the act. 

These fundamental rights impact assessments are vital to ensure that AI technologies and their deployment uphold the principles of justice, accountability, and fairness. 

But going beyond this, rule of law standards should be added to the impact assessments, with a structured framework to evaluate the potential risks, biases, and unintended consequences of AI deployment.

Beyond the mere identification of potential risks, they can encompass mitigation strategies, periodic reviews, and updates.

This also allows for rule of law violations stemming from the use of AI to be addressed using all the means available to the EU — for example, when they occur in criminal justice systems, many of which use AI for automated decision-making processes to limit the burden and the time pressure on judges. 

But to ensure judicial independence, the right to a fair trial, and transparency, the AI used in justice systems must be subject to proper oversight and in line with the rule of law.

Risks of profiling and unlawful surveillance

Importantly, lawmakers should lay the foundation for proper rule of law protection in the AI Act by leaving out a blanket exemption for national security. 

AI systems developed or used for national security purposes must fall within the scope of the act; otherwise, a member state could readily use them — such as for public surveillance or analysing human behaviour — simply by invoking the national security carve-out.

The Pegasus spyware scandal, in which journalists, human rights activists and politicians were surveilled by their own governments, demonstrates the clear need to ensure that systems developed or used for national security purposes are not exempted from the scope of the AI Act. 

Furthermore, national security can mean different things across the EU depending on the laws of the member states. 

Profiling citizens based on national governments’ interests would create inequality across the EU, posing an equal threat to both the rule of law and fundamental rights.

No blanket exceptions

With Polish and European Parliament elections upcoming, there is no question that AI can and will be used to target individuals with personalised messages, including to spread disinformation, with the potential of distorting otherwise fair elections. 

On the other hand, AI tools will be deployed for fact-checking, blocking bots and content, and identifying troll farms as well. These techniques must be transparent to prevent misuse or abuse of power.

The need to explicitly link the rule of law within the AI Act is clear, as is the importance of mandating impact assessments that consider both fundamental rights and the rule of law — without a blanket exemption for national security uses. 

Artificial intelligence is a part of everyday life in countless ways, and how we choose to regulate it will shape our societies. 

EU lawmakers must use this opportunity to craft a law that harnesses the opportunities of AI without undermining the protection of our rights or the rule of law. 

Eva Simon serves as Advocacy Lead for Tech & Rights, and Jonathan Day is Communications Manager at the Civil Liberties Union For Europe, a Berlin-based campaign network to strengthen the rule of law in the European Union.


Facial recognition technology should be regulated, but not banned

By Tony Porter, Chief Privacy Officer, Corsight AI, and Dr Nicole Benjamin Fink, Founder, Conservation Beyond Borders

The European Commission has proven itself to be an effective regulator in the past. A blanket ban on FRT in law enforcement will only benefit the criminals, Tony Porter and Dr Nicole Benjamin Fink write.

The EU’s AI Act passed a major hurdle in mid-June when the bloc’s lawmakers greenlit what will be the world’s first rules on artificial intelligence. 

But one proposal stands apart: a total ban on facial recognition technology, or FRT. 

If left to stand, this rule will blindfold the law enforcers who do vital work to protect the most vulnerable in society. It will embolden criminal groups such as those who traffic wildlife and human victims, thereby putting lives at risk.

All surveillance capabilities intrude on human rights to some extent. The question is whether we can regulate the use of FRT effectively to mitigate any impact on these rights. 

Protecting privacy versus protecting people is a balance EU lawmakers can and must strike. A blanket ban is the easy, but not the responsible option.

Privacy concerns should face a reality check

MEPs voted overwhelmingly in favour of a ban on the use of live FRT in publicly accessible spaces, and a similar ban on the use of “after the event” FRT unless a judicial order is obtained. 

Now attention has shifted to no doubt heated trilogue negotiations between the European Parliament, the Council of the EU (representing member states) and the European Commission.

FRT in essence uses cameras powered by AI algorithms to analyse a person’s facial features, potentially enabling authorities to match individuals against a database of pre-existing images, in order to identify them. 
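
To make that matching step concrete, here is a minimal illustrative sketch. It assumes a hypothetical embedding model has already converted each face image into a numeric vector; the cosine-similarity comparison, the function names and the 0.6 threshold are placeholder assumptions for illustration, not any vendor's actual pipeline.

```python
import numpy as np

def cosine_similarity(a: np.ndarray, b: np.ndarray) -> float:
    """Similarity between two face embeddings, in the range [-1, 1]."""
    return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b)))

def match_against_watchlist(probe: np.ndarray,
                            watchlist: dict[str, np.ndarray],
                            threshold: float = 0.6):
    """Return the best-scoring identity if it clears the threshold, else None.

    `probe` is the embedding of the face seen by the camera;
    `watchlist` maps identity labels to stored embeddings.
    The threshold trades false matches against missed matches.
    """
    best_id, best_score = None, -1.0
    for identity, stored in watchlist.items():
        score = cosine_similarity(probe, stored)
        if score > best_score:
            best_id, best_score = identity, score
    return (best_id, best_score) if best_score >= threshold else None

# Illustrative usage with random vectors standing in for real embeddings:
rng = np.random.default_rng(0)
watchlist = {"person_A": rng.normal(size=512), "person_B": rng.normal(size=512)}
probe = watchlist["person_A"] + rng.normal(scale=0.1, size=512)  # noisy re-capture
print(match_against_watchlist(probe, watchlist))
```

Even in this toy form, the threshold illustrates the trade-off at stake: lowering it to catch more people on a watchlist inevitably sweeps in more innocent look-alikes.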

Privacy campaigners have long argued that the potential benefits of using such tech are not worth the negative impact on human rights. But many of those arguments don’t stand up to scrutiny. In fact, they’re based on conclusively debunked myths.

The first is that the tech is inaccurate and that it disproportionately disadvantages people of colour. 

That may have been true of very early iterations of the technology, but it certainly isn’t today. Corsight has been benchmarked by the US National Institute of Standards and Technology (NIST) to an accuracy rate of 99.8%, for example. 

Separately, a 2020 NIST report claimed that FRT performs far more effectively across racial and other demographic groups than widely reported, with the most accurate technologies displaying “undetectable” differences between groups.

It’s also falsely claimed that FRT is ineffective. In fact, Interpol said in 2021 that it had been able to identify almost 1,500 terrorists, criminals, fugitives, persons of interest and missing persons since 2016 using FRT. That figure is expected to have risen exponentially since.

A final myth, that FRT intrudes on human rights as enshrined by the European Convention of the same name, was effectively shot down by the Court of Appeal in London. In that 2020 case, judges ruled that scanning faces and instantly deleting the data if a match can’t be found has a negligible impact on human rights.

It’s about stopping the traffickers

On the other hand, if used in compliance with strict regulations, high-quality FRT has the capacity to save countless lives and protect people and communities from harm. 

Human trafficking is a trade in misery which enables sexual exploitation, forced labour and other heinous crimes. It’s estimated to affect tens of millions around the world, including children. 

But if facial images of known victims or traffickers are caught on camera, police could be alerted in real-time to step in. 

Given that traffickers usually go to great lengths to hide their identity, and that victims — especially children — rarely possess official IDs, FRT offers a rare opportunity to make a difference.

Wildlife trafficking is similarly clandestine. It’s a global trade estimated many years ago at €20.9 billion — the world’s fourth biggest illegal activity behind arms, drugs and human trafficking. 

With much of the trade carried out by criminal syndicates online, there’s a potential evidence trail if investigators can match facial images of trafficked animals to images posted later to social media. 

Buyers can then be questioned as to whom they procured a particular animal from. Apps are already springing up to help track wildlife traffickers in this way.

There is a better way forward

Given what’s at stake here, European lawmakers should be thinking about ways to leverage a technology proven to help reduce societal harm — but in a way that mitigates risks to human rights. 

The good news is that it can be done with the right regulatory guardrails. In fact, the EU’s AI Act already provides a great foundation for this, by proposing a standard of excellence for AI technologies which FRT could be held to.

Building on this, FRT should be retained as an operational tool wherever there’s a “substantial” risk to the public and a legitimate basis for protecting citizens from harm.

Its use should always be necessary and proportionate to that pressing need, and subject to a rigorous human rights assessment. 

Independent ethical and regulatory oversight must of course be applied, with a centralised supervisory authority put in place. And clear policies should be published setting out details of the proposed use. 

Impacted communities should be consulted and data published detailing the success or failure of deployments and human rights assessments.

The European Commission has proven itself to be an effective regulator in the past. So, let’s regulate FRT. A blanket ban will only benefit the criminals.

Tony Porter is the Chief Privacy Officer at Corsight AI and the former UK Surveillance Camera Commissioner, and Dr Nicole Benjamin Fink is the Founder of Conservation Beyond Borders.


How should we regulate generative AI, and what will happen if we fail?

By Rohit Kapoor, Vice Chairman and CEO, EXL

Regulation must encourage collaboration and research between all the major players, from experts in the field to policy-makers and ethicists, Rohit Kapoor writes.

Generative AI is experiencing rapid growth and expansion. 

There’s no question as to whether this technology will change the world — all that remains to be seen is how long it will take for the transformative impact to be realised and how exactly it will manifest in each industry and niche. 

Whether it’s fully automated and targeted consumer marketing, medical reports generated and summarised for doctors, or chatbots with distinct personality types being tested by Instagram, generative AI is driving a revolution in just about every sector.

The potential benefits of these advancements are monumental. Quantifying the hype, a recent report by Bloomberg Intelligence predicted an explosion in generative AI market growth, from $40 billion (€36.5bn) in 2022 to $1.3 trillion (€1.18tn) in the next ten years. 

But in all the excitement to come, it’s absolutely critical that policy-makers and corporations alike do not lose sight of the risks of this technology.

These large language models, or LLMs, present dangers which not only threaten the very usefulness of the information they produce but could also prove threatening in entirely unintentional ways — from bias to blurring the lines between real and artificial to loss of control.

Who’s responsible?

The responsibility for taking the reins on regulation falls naturally with governments and regulatory bodies, but it should also extend beyond them. The business community must self-govern and contribute to principles that can become regulations while policy-makers deliberate.

Two core principles should be followed as soon as possible by those developing and running generative AI, in order to foster responsible use and mitigate negative impacts. 

First, large language models should only be applied to closed data sets to ensure safety and confidentiality. 

Second, all development and adoption of use cases leveraging generative AI should have the mandatory oversight of professionals to ensure “humans in the loop”.

These principles are essential for maintaining accountability, transparency, and fairness in the use of generative AI technologies.
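
As an illustration of the second of these principles, the sketch below shows one way a "human in the loop" gate might look in practice: a generative model's output is held back and can only be released after a named professional signs off. The class, function names and workflow are hypothetical, intended only to show the shape of such oversight rather than any specific product.

```python
from dataclasses import dataclass

@dataclass
class PendingOutput:
    """A generative-model response held back until a named reviewer signs off."""
    prompt: str
    model_response: str
    reviewer: str | None = None
    approved: bool = False

def review(item: PendingOutput, reviewer: str, approve: bool) -> None:
    """Record a human professional's decision on a pending output."""
    item.reviewer = reviewer
    item.approved = approve

def release(item: PendingOutput) -> str:
    """Only approved, human-reviewed outputs ever reach the end user."""
    if not (item.approved and item.reviewer):
        raise PermissionError("output has not been approved by a human reviewer")
    return item.model_response

# Illustrative flow: a draft medical summary is queued, reviewed, then released.
draft = PendingOutput(prompt="Summarise this medical report",
                      model_response="(model-generated summary)")
review(draft, reviewer="dr.lee", approve=True)
print(release(draft))
```

The point is not the code itself but the workflow it enforces: no generative output reaches a decision or a customer without a named human accountable for it.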

From there, three main areas will need attention from a regulatory perspective.

Maintaining our grip on what’s real

The capabilities of generative AI to mimic reality are already quite astounding, and it’s improving all the time. 

So far this year, the internet has been awash with startling images like the Pope in a puffer jacket or the Mona Lisa as she would look in real life. 

And chatbots are being deployed in unexpected realms like dating apps — where the introduction of the technology is reportedly intended to reduce “small talk”.

The wider public should feel no guilt in enjoying these creative outputs, but industry players and policy-makers must be alive to the dangers of this mimicry. 

Amongst them are identity theft and reputational damage. 

Distinguishing between AI-generated content and content genuinely created by humans is a significant challenge, and regulation should consider both its consequences and how such content can be monitored.

Clear guidelines are needed to determine the responsibility of platforms and content creators to label AI-generated content. 

Robust verification systems like watermarking or digital signatures would support this authentication process.
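
A minimal sketch of what such labelling could look like in code is below. It uses an HMAC tag as a simple stand-in for a full digital signature or watermark; the key handling, field names and scheme are assumptions for illustration only, not a description of any existing standard, and real provenance schemes are considerably more elaborate.

```python
import hmac, hashlib, json

SECRET_KEY = b"provider-held-signing-key"  # hypothetical; real schemes use asymmetric keys

def sign_content(content: str, model_id: str) -> dict:
    """Attach a provenance record and an integrity tag to generated content."""
    record = {"content": content, "model_id": model_id, "ai_generated": True}
    payload = json.dumps(record, sort_keys=True).encode()
    record["tag"] = hmac.new(SECRET_KEY, payload, hashlib.sha256).hexdigest()
    return record

def verify_content(record: dict) -> bool:
    """Check that the content and its AI-generated label have not been altered."""
    claimed_tag = record.get("tag")
    payload = json.dumps({k: v for k, v in record.items() if k != "tag"},
                         sort_keys=True).encode()
    expected = hmac.new(SECRET_KEY, payload, hashlib.sha256).hexdigest()
    return claimed_tag is not None and hmac.compare_digest(claimed_tag, expected)

labelled = sign_content("An image caption produced by a model.", "example-model-v1")
print(verify_content(labelled))          # True: record intact
labelled["ai_generated"] = False
print(verify_content(labelled))          # False: tampering detected
```

Whatever the mechanism, the design point is the same: a label is only useful if it travels with the content and platforms are obliged to verify it.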

Tackling imperfections that lead to bias

Policy-makers must set about regulating the monitoring and validation of imperfections in the data, algorithms and processes used in generative AI. 

Bias is a major factor. Training data can be biased or inadequate, resulting in a bias in the AI itself. 

For example, this might cause a company chatbot to deprioritise customer complaints that come from customers of a certain demographic or a search engine to throw up biased answers to queries. And biases in algorithms can perpetuate those unfair outcomes and discrimination.

Regulations need to force the issue of transparency and push for clear documentation of processes. This would help ensure that processes can be explained and that accountability is upheld. 

At the same time, it would enable scrutiny of generative AI systems, including safeguarding of intellectual property (IP) and data privacy — which, in a world where data is the new currency, is crucially important.

On top of this, regulating the documentation involved would help prevent “hallucinations” by AI — which are essentially where an AI gives a response that is not justified by the data used to train it.

Preventing the tech from becoming autonomous and uncontrollable

An area for special caution is the potential for an iterative process of AI creating subsequent generations of AI, eventually leading to AI that is misdirected or compounding errors. 

The progression from first-generation to second- and third-generation AI is expected to occur rapidly. 

The fundamental requirement of the self-declaration of AI models, where each model openly acknowledges its AI nature, is of utmost importance. 

However, enabling and regulating this self-declaration poses a significant practical challenge. One approach could involve mandating hardware and software companies to implement hardcoded restrictions, allowing only a certain threshold of AI functionality. 

Advanced functionality above such a threshold could be subject to an inspection of systems, audits, testing for compliance with safety standards, restrictions on degrees of deployment and levels of security, etc. Regulators should define and enforce these restrictions to mitigate risks.

We should be acting quickly and together

The world-changing potential of generative AI demands a coordinated response. 

If each country and jurisdiction develops its own rules, the adoption of the technology — which has the potential for enormous good in business, medicine, science and more — could be crippled. 

Regulation must encourage collaboration and research between all the major players, from experts in the field to policy-makers and ethicists. 

With a coordinated approach, the risks can be sensibly mitigated, and the full benefits of generative AI realised, unlocking its huge potential.

Rohit Kapoor is the Vice Chairman and CEO of EXL, a data analytics and digital operations and solutions company.


Retrospective facial recognition tech conceals human rights abuses

By Ella Jakubowska, EDRi, Hajira Maryam and Matt Mahmoudi, Amnesty International

Following the burglary of a French logistics company in 2019, facial recognition technology (FRT) was used on security camera footage of the incident in an attempt to identify the perpetrators. 

FRT works by attempting to match images from, for example, closed-circuit television (CCTV) cameras to databases of often millions of facial images, in many cases collected without knowledge and consent. 
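
To illustrate the scale involved, here is a small sketch of a retrospective search that ranks every enrolled face by similarity to a CCTV still and returns a long candidate list, much like the 200-person list described below. The database of random vectors, the function name and the similarity scoring are placeholders; real systems run proprietary face-embedding models over millions of records.

```python
import numpy as np

def top_candidates(probe: np.ndarray, database: np.ndarray, k: int = 200):
    """Return the k database rows most similar to the probe embedding.

    `database` holds one face embedding per row (millions in real deployments).
    Nothing in this ranking certifies a true match: the output is simply the
    k least dissimilar records, any of which may belong to an uninvolved person.
    """
    scores = (database @ probe) / (np.linalg.norm(database, axis=1) * np.linalg.norm(probe))
    order = np.argsort(-scores)[:k]
    return list(zip(order.tolist(), scores[order].tolist()))

# Illustrative run: 100,000 random stand-in embeddings of dimension 128.
rng = np.random.default_rng(1)
database = rng.normal(size=(100_000, 128)).astype(np.float32)
probe = rng.normal(size=128).astype(np.float32)
shortlist = top_candidates(probe, database)
print(len(shortlist), shortlist[:3])
```

A ranked shortlist of this kind is precisely what investigators may then treat as a pool of suspects.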

In this case, the FRT system listed two hundred people as potential suspects. 

From this list, the police singled out ‘Mr H’ and charged him with the theft, despite a lack of physical evidence to connect him to the crime.

At his trial, the court refused a request from Mr H’s lawyer to share information about how the system compiled the list, which was at the heart of the decision to charge Mr H. 

The judge decided to rely on this notoriously discriminatory technology, sentencing Mr H to 18 months in prison.

Indicted by facial recognition

“Live” FRT is often the target of (well-earned) criticism, as the technology is used to track and monitor individuals in real time. 

However, the use of facial recognition technology retrospectively, after an incident has taken place, is less scrutinised despite being used in cases like Mr H’s. 

Retrospective FRT is made easier and more pervasive by the wide availability of security camera footage and the infrastructures already in place for the technique.

Now, as part of negotiations for a new law to regulate artificial intelligence (AI), the AI Act, EU governments are proposing to allow the routine use of retrospective facial recognition against the public at large — by police, local governments and even private companies.

The EU’s proposed AI Act is based on the premise that retrospective FRT is less harmful than its “live” iteration.

The EU executive has argued that the risks and harms can be mitigated with the extra time that retrospective processing affords.

This argument is wrong. Not only does the extra time fail to tackle the key issues — the destruction of anonymity and the suppression of rights and freedoms — but it also introduces additional problems.

‘Post’ RBI: The most dangerous surveillance measure you’ve never heard of?

Remote Biometric Identification, or RBI, is an umbrella term for systems like FRT that scan and identify people using their faces — or other body parts — at a distance. 

When such systems are used retrospectively, the EU’s proposed AI Act refers to them as “Post RBI”. Post RBI means that software could be used to identify people in footage from public spaces hours, weeks, or even months after it was captured. 

For example, it could be used to run FRT on footage of protesters captured by CCTV cameras or, as in the case of Mr H, to run CCTV footage against a government database of a staggering 8 million facial images.

The use of these systems produces a chilling effect in society; on how comfortable we feel attending a protest, seeking healthcare — such as abortion in places where it is criminalised — or speaking with a journalist.

Just knowing that retrospective FRT may be in use could make us afraid of how information about our personal lives could be used against us in the future.

FRT can feed racism, too

Research suggests that the application of FRT disproportionately affects racialised communities. 

Amnesty International has demonstrated that individuals living in areas at greater risk of racist stop-and-search policing — overwhelmingly affecting people of colour — are likely to be more exposed to data harvesting and invasive facial recognition technology.

For example, Dwreck Ingram, a Black Lives Matter protest organiser from New York, was harassed by police forces at his apartment for four hours without a warrant or legitimate charge, simply because he had been identified by post RBI following his participation in a Black Lives Matter protest. 

Ingram ended up in a long legal battle to have false charges against him dropped after it became clear that the police had used this experimental technology on him.

The list goes on. Robert Williams, a resident of Detroit, was falsely arrested for theft committed by someone else. 

Randall Reid was sent to jail in Louisiana, a state he’d never visited, because the police used FRT to wrongly identify him as a suspect in a robbery. 

For racialised communities, in particular, the normalisation of facial recognition is the normalisation of their perpetual virtual line-up.

If you have an online presence, you’re probably already in FRT databases

This dystopian technology has also been used by football clubs in the Netherlands to scan for banned fans and wrongly issue a fine to a supporter who did not attend the match in question. 

Reportedly, it has also been used by police in Austria against protesters and in France under the guise of making cities “safer” and more efficient, while in fact expanding mass surveillance.

These technologies are often offered at little to no cost. 

One company offering such services is Clearview AI. The company has offered highly invasive facial recognition searches to thousands of law enforcement officers and agencies across Europe, the US and other regions. 

In Europe, national data protection authorities have taken a strong stance against these practices, with Italian and Greek regulators fining Clearview AI millions of euros for scraping the faces of EU citizens without legal basis. 

Swedish regulators fined the national police for unlawfully processing personal data when using Clearview AI to identify individuals.

AI Act could be a chance to end abuse of mass surveillance

Despite these promising moves to protect our human rights from retrospective facial recognition by data protection authorities, EU governments are now seeking to implement these dangerous practices regardless.

Biometric identification experiments in countries across the globe have shown us over and over again that these technologies, and the mass data collection they entail, erode the rights of the most marginalised people, including racialised communities, refugees, migrants and asylum seekers.

European countries have begun to legalise a range of biometric mass surveillance practices, threatening to normalise the use of these intrusive systems across the EU. 

This is why, more than ever, we need strong EU regulation that captures all forms of live and retrospective biometric mass surveillance in our communities and at EU borders, including stopping Post RBI in its tracks.

With the AI Act, the EU has a unique opportunity to put an end to rampant abuse facilitated by mass surveillance technologies. 

It must set a high standard for human rights safeguards for the use of emerging technologies, especially when these technologies amplify existing inequalities in society.

Ella Jakubowska is a Senior Policy Advisor at European Digital Rights (EDRi), a network collective of non-profit organisations, experts, advocates and academics working to defend and advance digital rights across the continent.

Hajira Maryam is a Media Manager, and Matt Mahmoudi is an AI and Human Rights Researcher at Amnesty Tech, a global collective of advocates, campaigners, hackers, researchers & technologists defending human rights in a digital age.


EU’s AI Act vote looms. We’re still not sure how free AI should be


The opinions expressed in this article are those of the author and do not represent in any way the editorial position of Euronews.

The European Union’s long-awaited law on artificial intelligence (AI) is expected to be put to the vote at the European Parliament at the end of this month. 

But Europe’s efforts to regulate AI could be nipped in the bud as lawmakers struggle to agree on critical questions regarding AI definition, scope, and prohibited practices. 

Meanwhile, Microsoft’s decision this week to scrap its entire AI ethics team despite investing $11 billion (€10.3bn) into OpenAI raises questions about whether tech companies are genuinely committed to creating responsible safeguards for their AI products.

At the heart of the dispute around the EU’s AI Act is the need to protect fundamental rights, such as data privacy and democratic participation, without restricting innovation. 

How close are we to algocracy?

The advent of sophisticated AI platforms, including the launch of ChatGPT in November last year, has sparked a worldwide debate on AI systems. 

It has also forced governments, corporations and ordinary citizens to address some uncomfortable existential and philosophical questions. 

How close are we to becoming an algocracy — a society ruled by algorithms? What rights will we be forced to forego? And how do we shield society from a future in which these technologies are used to cause harm? 

The sooner we can answer these and other similar questions, the better prepared we will be to reap the benefits of these disruptive technologies — but also steel ourselves against the dangers that accompany them.

The promise of technological innovation has taken a major leap forward with the arrival of new generative AI platforms, such as ChatGPT and DALL-E 2, which can create words, art and music with a set of simple instructions and provide human-like responses to complex questions.

These tools could be harnessed as a power for good, but the recent news that ChatGPT passed a US medical-licensing exam and a Wharton Business School MBA exam is a reminder of the looming operational and ethical challenges. 

Academic institutions, policy-makers and society at large are still scrambling to catch up.

ChatGPT passed the Turing Test — and it’s still in its adolescence

Developed in the 1950s, the so-called Turing Test has long been the line in the sand for AI. 

The test was used to determine whether a computer is capable of thinking like a human being. 

Mathematician and code-breaker Alan Turing was convinced that one day a human would be unable to distinguish between answers given by a real person and a machine. 

He was right — that day has come. In recent years, disruptive technologies have advanced beyond all recognition. 

AI technologies and advanced machine-learning chatbots are still in their adolescence; they need more time to bloom. 

But they give us a valuable glimpse of the future, even if these glimpses are sometimes a bit blurred. 

The optimists among us are quick to point to the enormous potential for good presented by these technologies: from improving medical research and developing new drugs and vaccines to revolutionising the fields of education, defence, law enforcement, logistics, manufacturing, and more. 

However, international organisations such as the EU Fundamental Rights Agency and the UN High Commissioner for Human Rights have been right to warn that these systems often do not work as intended. 

A case in point is the Dutch tax authority’s SyRI system, which used an algorithm to spot suspected benefits fraud, in breach of the European Convention on Human Rights.

How to regulate without slowing down innovation?

At a time when AI is fundamentally changing society, we lack a comprehensive understanding of what it means to be human. 

Looking to the future, there is also no consensus on how we will — and should — experience reality in the age of advanced artificial intelligence. 

We need to get to grips with the implications of sophisticated AI tools that have no concept of right or wrong, tools that malign actors can easily misuse. 

So how do we go about governing the use of AI so that it is aligned with human values? I believe that part of the answer lies in creating clear-cut regulations for AI developers, deployers and users. 

All parties need to be on the same page when it comes to the requirements and limits for the use of AI, and companies such as OpenAI and DeepMind have the responsibility to bring their products into public consciousness in a way that is controlled and responsible. 

Even Mira Murati, the Chief Technology Officer at OpenAI, the company behind ChatGPT, has called for more regulation of AI. 

If managed correctly, direct dialogue between policy-makers, regulators and AI companies will provide ethical safeguards without slowing innovation.

One thing is for sure: the future of AI should not be left in the hands of programmers and software engineers alone. 

In our search for answers, we need an alliance of experts from all fields

The philosopher, neuroscientist and AI ethics expert Professor Nayef Al-Rodhan makes a convincing case for a pioneering type of transdisciplinary inquiry — Neuro-Techno-Philosophy (NTP). 

NTP makes a case for creating an alliance of neuroscientists, philosophers, social scientists, AI experts and others to help understand how disruptive technologies will impact society and the global system. 

We would be wise to take note. 

Al-Rodhan, and other academics who connect the dots between (neuro)science, technology and philosophy, will be increasingly useful in helping humanity navigate the ethical and existential challenges created by these game-changing innovations and their potential impacts on consequential frontier risks and humanity’s futures.

In the not-too-distant future, we will see robots carry out tasks that go far beyond processing data and responding to instructions: a new generation of autonomous humanoids with unprecedented levels of sentience. 

Before this happens, we need to ensure that ethical and legal frameworks are in place to protect us from the dark sides of AI. 

Civilisational crossroads beckons

At present, we overestimate our capacity for control, and we often underestimate the risks. This is a dangerous approach, especially in an era of digital dependency. 

We find ourselves at a unique moment in time, a civilisational crossroads, where we still have the agency to shape society and our collective future. 

We have a small window of opportunity to future-proof emerging technologies, making sure that they are ultimately used in the service of humanity. 

Let’s not waste this opportunity.

Oliver Rolofs is a German security expert and the Co-Founder of the Munich Cyber Security Conference (MCSC). He was previously Head of Communications at the Munich Security Conference, where he established the Cybersecurity and Energy Security Programme.
