Israel’s appetite for high-tech weapons highlights a Biden policy gap

Within hours of the Hamas attack on Israel last month, a Silicon Valley drone company called Skydio began receiving emails from the Israeli military. The requests were for the company’s short-range reconnaissance drones — small flying vehicles used by the U.S. Army to navigate obstacles autonomously and produce 3D scans of complex structures like buildings.

The company said yes. In the three weeks since the attack, Skydio has sent more than 100 drones to the Israel Defense Forces, with more to come, according to Mark Valentine, the Skydio executive in charge of government contracts.

Skydio isn’t the only American tech company fielding orders. Israel’s ferocious campaign to eliminate Hamas from the Gaza Strip is creating new demand for cutting-edge defense technology — often supplied directly by newer, smaller manufacturers, outside the traditional nation-to-nation negotiations for military supplies.

Already, Israel is using self-piloting drones from Shield AI for close-quarters indoor combat and has reportedly requested 200 Switchblade 600 kamikaze drones from another U.S. company, according to DefenseScoop. Jon Gruen, CEO of Fortem Technologies, which supplied Ukrainian forces with radar and autonomous anti-drone aircraft, said he was having “early-stage conversations” with Israelis about whether the company’s AI systems could work in the dense, urban environments in Gaza.

This surge of interest echoes the one driven by the even larger conflict in Ukraine, which has been a proving ground for new AI-powered defense technology — much of it ordered by the Ukrainian government directly from U.S. tech companies.

AI ethicists have raised concerns about the Israeli military’s use of AI-driven technologies to target Palestinians, pointing to reports that the army has used AI to strike more than 11,000 targets in Gaza since Hamas militants launched a deadly assault on Israel on Oct. 7.

The Israeli defense ministry did not elaborate in response to questions about its use of AI.

These sophisticated platforms also pose a new challenge for the Biden administration. On Nov. 13, the U.S. began implementing a new foreign policy to govern the responsible military use of such technologies. The policy, first unveiled in The Hague in February and endorsed by 45 other countries, is an effort to keep the military use of AI and autonomous systems within the international law of war.

But neither Israel nor Ukraine is a signatory, leaving a growing hole in the young effort to keep high-tech weapons operating within agreed-upon lines.

Asked about Israel’s compliance with the U.S.-led declaration on military AI, a spokesperson for the State Department said “it is too early” to draw conclusions about why some countries have not endorsed the document, or to suggest that non-endorsing countries disagree with the declaration or will not adhere to its principles.

Mark Cancian, a senior adviser with the CSIS International Security Program, said in an interview that “it’s very difficult” to reach international agreement between nations on the military use of AI for two reasons: “One is that the technology is evolving so quickly that the constraints you put on it today may no longer be relevant five years from now because the technology will be so different. The other thing is that so much of this technology is civilian, that it’s hard to restrict military development without also affecting civilian development.”

In Gaza, drones are largely being used for surveillance, scouting locations and looking for militants without risking soldiers’ lives, according to Israeli and U.S. military technology developers and observers interviewed for this story.

Israel discloses few specifics of how it uses this technology, and some worry the Israeli military is using unreliable AI recommendation systems to identify targets for lethal operations.

Ukrainian forces have used experimental AI systems to identify Russian soldiers, weapons and unit positions from social media and satellite feeds.

Observers say that Israel is a particularly fast-moving theater for new weaponry because it has a technically sophisticated military, large budget, and — crucially — close existing ties to the U.S. tech industry.

“The difference, now maybe more than ever, is the speed at which technology can move and the willingness of suppliers of that technology to deal directly with Israel,” said Arun Seraphin, executive director of the National Defense Industrial Association’s Institute for Emerging Technologies.

Though the weapons trade is subject to scrutiny and regulation, autonomous systems also raise special challenges. Unlike traditional military hardware, buyers are able to reconfigure these smart platforms for their own needs, adding a layer of inscrutability to how these systems are used.

While many of the U.S.-built, AI-enabled drones sent to Israel are not armed and not programmed by the manufacturers to identify specific vehicles or people, these airborne robots are designed to leave room for military customers to run their own custom software, which they often prefer to do, multiple manufacturers told POLITICO.

Shield AI co-founder Brandon Tseng confirmed that users are able to customize the Nova 2 drones that the IDF is using to search for barricaded shooters and civilians in buildings targeted by Hamas fighters.

Matt Mahmoudi, who authored Amnesty International’s May report documenting Israel’s use of facial recognition systems in Palestinian territories, told POLITICO that historically, U.S. technology companies contracting with Israeli defense authorities have had little insight or control over how their products are used by the Israeli government, pointing to several instances of the Israeli military running its own AI software on hardware imported from other countries to closely monitor the movement of Palestinians.

Complicating the issue are the blurred lines between military and non-military technology. In the industry, the term is “dual-use” — a system, like a drone-swarm equipped with computer-vision, that might be used for commercial purposes but could also be deployed in combat.

The Technology Policy Lab at the Center for a New American Security writes that “dual-use technologies are more difficult to regulate at both the national and international levels” and notes that in order for the U.S. to best apply export controls, it “requires complementary commitment from technology-leading allies and partners.”

Exportable military-use AI systems can run the gamut from commercial products to autonomous weapons. Even in cases where AI-enabled systems are explicitly designed as weapons, meaning U.S. authorities are required by law to monitor the transfer of these systems to another country, the State Department only recently adopted policies to monitor civilian harm caused by these weapons, in response to Congressional pressure.

But enforcement is still a question mark: Josh Paul, a former State Department official, wrote that a planned report on the policy’s implementation was canceled because the department wanted to avoid any debate on civilian harm risks in Gaza from U.S. weapons transfers to Israel.

A Skydio spokesperson said the company is currently not aware of any users breaching its code of conduct and would “take appropriate measures” to mitigate the misuse of its drones. A Shield AI spokesperson said the company is confident its products are not being used to violate humanitarian norms in Israel and “would not support” the unethical use of its products.

In response to queries about whether the U.S. government is able to closely monitor high-tech defense platforms sent by smaller companies to Israel or Ukraine, a spokesperson for the U.S. State Department said it was restricted from publicly commenting or confirming the details of commercially licensed defense trade activity.

Some observers point out that the Pentagon derives some benefit from watching new systems tested elsewhere.

“The great value for the United States is we’re getting to field test all this new stuff,” said CSIS’s Cancian — a process that takes much longer in peacetime environments and allows the Pentagon to place its bets on novel technologies with more confidence, he added.


The EU should make facial recognition history for the right reasons

The opinions expressed in this article are those of the author and do not represent in any way the editorial position of Euronews.

Whilst civil rights activists have long called for an outright ban, certain EU lawmakers may see the AI Act as an opportunity to claim that they are doing the (human) right(s) thing — and actually doing the opposite, Ella Jakubowska writes.


In June 2023, the European Parliament made history when it voted in favour of a total ban on live facial recognition in public spaces. 

Through the new artificial intelligence bill, the EU could stop companies and authorities from treating us all as walking barcodes. But pressure from EU governments threatens to transform this possibility into the stuff of George Orwell’s nightmares.

Since late summer 2023, EU governments and parliamentarians have strived to reach an agreement on the Artificial Intelligence (AI) Act. This landmark law has promised that the EU will become a world leader in balancing AI innovation with protection — but the reality is less optimistic.

EU lawmakers are debating throughout November how much leeway to give police to use public facial recognition. 

Despite these systems having been tied to human rights violations around the world, and recently condemned by Amnesty International for facilitating Israel’s system of oppression against Palestinians, the European Parliament’s commitment to a strong ban is at risk.

The commodification and abuse of our most sensitive data

In the last decade, information about every facet of our physical being — the faces, fingerprints, and eyes of practically every person worldwide — has become common currency.

This information — known as biometric data — is a mathematical representation of the most minute and intimate details and characteristics that make up who we are. From the vibrancy of human difference, we are collectively boiled down to a string of 1s and 0s.

Biometric data have been used in recent years to surveil and monitor people, from trying to suppress and scare pro-democracy protesters in Hong Kong and Russia to persecuting Black communities in the US. 

Even seemingly mundane uses, like in national identity documents, have in fact turned out to be great enablers for systems which scan people’s faces and bodies without due cause — a move that amounts to biometric mass surveillance.

Whilst the EU presents itself as a beacon of democracy and human rights, it has of course not been immune to practices which amount to biometric mass surveillance.

Transport hubs in Germany and Belgium, protesters in Austria, people going about their day in the Czech Republic, people sleeping rough in Italy and many, many more have all been subjected to public facial recognition surveillance. 

Most recently, France made its aspirations for biometric mass surveillance clear, passing a law to roll out automated surveillance systems for use at the upcoming Olympic and Paralympic games.

A wolf in sheep’s clothing

Human rights advocates have long argued that using people’s faces and bodies to identify and track them at scale already runs contrary to EU human rights and data protection law. 

Whether in live mode or used retrospectively, notoriously unreliable and discriminatory public facial recognition infringes massively on our human rights and essential dignity.

But the use of such systems — often under vague claims of “public safety” — is widespread, and legal protections against them are patchy and applied inconsistently.

The AI surveillance industry has all but told lawmakers: “For national security reasons, we cannot disclose evidence that these systems work, but we can assure you that they do”. What’s worse is that lawmakers seem to be taking their word for it.

Whilst civil rights activists have long called for an outright ban, certain EU lawmakers may see the AI Act as an opportunity to claim that they are doing the (human) right(s) thing — and actually doing the opposite.

According to media reports from Brussels, despite previous commitments to outlaw biometric mass surveillance practices, the European Parliament is now considering “narrow” exceptions to a ban on the live recognition of faces and other human features.


Nothing but a full ban in the AI Act will do

But the crux of this issue is that you cannot allow just a little bit of biometric mass surveillance without opening the floodgates.

This is not hypothetical: using the purportedly “narrow” exceptions written into the draft AI Act by the EU’s executive arm, the government of Serbia has twice tried to legalise the roll-out of thousands of Huawei facial recognition-equipped cameras. 

If the EU AI Act permits exceptions that would allow EU countries to make use of untargeted public facial recognition, it will not be long until we are fighting off biometric mass surveillance laws in all twenty-seven EU countries.

One of the grounds for use that is reportedly being considered as an exception to the ban is the search for people suspected or convicted of serious crimes. But there is simply no way to do this without scanning the features of everyone in a public space, which research has proven has a severe chilling effect on democracies.

There are also major questions about necessity. If there really is an urgent situation, is a risky and unreliable technology like facial recognition actually going to help?


Whilst the press in Brussels notes that the European Parliament would want safeguards to be added, there is little that these safeguards can do to stop people having to look over their shoulder everywhere they go. You cannot safeguard the violation of a fundamental human right.

Freedom or mass surveillance?

The EU is on the precipice of a huge achievement — an AI Act which truly puts people at its centre. 

But if done poorly, we will instead find ourselves on the precipice of a law which tells the whole world that the EU prioritises the surveillance industry over people and communities.

Authorities will be able to use the exceptions to public facial recognition to justify the near-permanent use of these systems. 

And the mass surveillance infrastructure — vast networks of public cameras and sensors propped up by powerful processors — will be ready and waiting for us at the press of a button. 


There’s no need to smile for the camera any more — you’ll be captured whether you like it or not.

Ella Jakubowska is a Senior Policy Advisor at European Digital Rights (EDRi), a network collective of non-profit organisations, experts, advocates and academics working to defend and advance digital rights across the continent.

At Euronews, we believe all views matter. Contact us at [email protected] to send pitches or submissions and be part of the conversation.


Facial recognition technology should be regulated, but not banned

By Tony Porter, Chief Privacy Officer, Corsight AI, and Dr Nicole Benjamin Fink, Founder, Conservation Beyond Borders

The European Commission has proven itself to be an effective regulator in the past. A blanket ban on FRT in law enforcement will only benefit the criminals, Tony Porter and Dr Nicole Benjamin Fink write.

The EU’s AI Act passed a major hurdle in mid-June when the bloc’s lawmakers greenlit what will be the world’s first rules on artificial intelligence. 


But one proposal stands apart: a total ban on facial recognition technology, or FRT. 

If left to stand, this rule will blindfold the law enforcers who do vital work to protect the most vulnerable in society. It will embolden criminal groups such as those who traffic wildlife and human victims, thereby putting lives at risk.

All surveillance capabilities intrude on human rights to some extent. The question is whether we can regulate the use of FRT effectively to mitigate any impact on these rights. 

Protecting privacy versus protecting people is a balance EU lawmakers can and must strike. A blanket ban is the easy, but not the responsible option.

Privacy concerns should face a reality check

MEPs voted overwhelmingly in favour of a ban on the use of live FRT in publicly accessible spaces, and a similar ban on the use of “after the event” FRT unless a judicial order is obtained. 

Now attention has shifted to no doubt heated trilogue negotiations between the European Parliament, European Council and member states.

FRT in essence uses cameras powered by AI algorithms to analyse a person’s facial features, potentially enabling authorities to match individuals against a database of pre-existing images, in order to identify them. 

Privacy campaigners have long argued that the potential benefits of using such tech are not worth the negative impact on human rights. But many of those arguments don’t stand up to scrutiny. In fact, they’re based on conclusively debunked myths.

The first is that the tech is inaccurate and that it disproportionately disadvantages people of colour. 

That may have been true of very early iterations of the technology, but it certainly isn’t today. Corsight has been benchmarked by the US National Institute of Standards and Technology (NIST) to an accuracy rate of 99.8%, for example. 

Separately, a 2020 NIST report claimed that FRT performs far more effectively across racial and other demographic groups than widely reported, with the most accurate technologies displaying “undetectable” differences between groups.


It’s also falsely claimed that FRT is ineffective. In fact, Interpol said in 2021 that it had been able to identify almost 1,500 terrorists, criminals, fugitives, persons of interest and missing persons since 2016 using FRT. That figure is expected to have risen exponentially since.

A final myth, that FRT intrudes on human rights as enshrined by the European Convention of the same name, was effectively shot down by the Court of Appeal in London. In that 2020 case, judges ruled that scanning faces and instantly deleting the data if a match can’t be found has a negligible impact on human rights.

It’s about stopping the traffickers

On the other hand, if used in compliance with strict regulations, high-quality FRT has the capacity to save countless lives and protect people and communities from harm. 

Human trafficking is a trade in misery which enables sexual exploitation, forced labour and other heinous crimes. It’s estimated to affect tens of millions around the world, including children. 

But if facial images of known victims or traffickers are caught on camera, police could be alerted in real-time to step in. 


Given that traffickers usually go to great lengths to hide their identity, and that victims — especially children — rarely possess official IDs, FRT offers a rare opportunity to make a difference.

Wildlife trafficking is similarly clandestine. It’s a global trade estimated many years ago at €20.9 billion — the world’s fourth biggest illegal activity behind arms, drugs and human trafficking. 

With much of the trade carried out by criminal syndicates online, there’s a potential evidence trail if investigators can match facial images of trafficked animals to images posted later to social media. 

Buyers can then be questioned as to whom they procured a particular animal from. Apps are already springing up to help track wildlife traffickers in this way.

There is a better way forward

Given what’s at stake here, European lawmakers should be thinking about ways to leverage a technology proven to help reduce societal harm — but in a way that mitigates risks to human rights. 


The good news is that it can be done with the right regulatory guardrails. In fact, the EU’s AI Act already provides a great foundation for this, by proposing a standard of excellence for AI technologies which FRT could be held to.

Building on this, FRT should be retained as an operational tool wherever there’s a “substantial” risk to the public and a legitimate basis for protecting citizens from harm.

Its use should always be necessary and proportionate to that pressing need, and subject to a rigorous human rights assessment. 

Independent ethical and regulatory oversight must of course be applied, with a centralised supervisory authority put in place. And clear policies should be published setting out details of the proposed use. 

Impacted communities should be consulted and data published detailing the success or failure of deployments and human rights assessments.

The European Commission has proven itself to be an effective regulator in the past. So, let’s regulate FRT. A blanket ban will only benefit the criminals.

Tony Porter is the Chief Privacy Officer at Corsight AI and the former UK Surveillance Camera Commissioner, and Dr Nicole Benjamin Fink is the Founder of Conservation Beyond Borders.

At Euronews, we believe all views matter. Contact us at [email protected] to send pitches or submissions and be part of the conversation.


Eva Kaili is back with a new story: There’s a conspiracy

ATHENS — Eva Kaili is spinning up a new, eyebrow-raising narrative: Authorities might have targeted her because she knew too much about government spying.

After months of silence during her detention and house arrest, the most high-profile suspect in the cash-for-influence Qatargate scandal was suddenly everywhere over the weekend. 

Across a trio of interviews in the European media, the Greek European Parliament member was keen to proclaim her innocence, saying she never took any of the alleged bribes that authorities say countries such as Qatar and Morocco used to sway the Brussels machinery. 

But she also had a story to tell even darker than Qatargate, one involving insinuations of nefarious government spying and suggestions that maybe, just maybe, her jailing was politically motivated. Her work investigating the illegal use of Pegasus spyware in Europe, she argued, put her in the crosshairs of Europe’s own governments. 

“From the court file, my lawyers have discovered that the Belgian secret services have allegedly been monitoring the activities of members of the Pegasus special committee,” she told the Italian newspaper Corriere della Sera.

“The fact that elected members of Parliament are being spied on by the secret services should raise more concerns about the health of our European democracy,” she added. “I think this is the ‘real scandal.’”

As Kaili reemerges and starts pointing the finger back at the government, the Belgian prosecutor’s office has decided to remain mum. A spokesperson on Monday said the prosecutor’s office was “not going to respond” to Kaili’s allegations. 

“This would violate the confidentiality of the investigation and the presumption of innocence,” the spokesperson added. “The evidence will be presented in court in due course.”

But her PR blitz is nonetheless a likely preview of Qatargate’s next chapter: The battle to win the public narrative.

A European media tour

In addition to her interview with the Italian press, Kaili also appeared in the Spanish and French press, where she expanded on her spying theory. 

In a video interview with the Spanish newspaper El Mundo, Kaili said her legal team has evidence the entire PEGA committee was being watched illegally, arguing she does not know how the police intercepted certain conversations between her and other politicians. 

“I was not spied on with Pegasus, but for Pegasus,” she said. “We believe Morocco, Spain, France and Belgium spied on the European Parliament’s committee,” she told El Mundo.

Kaili’s assertions have not been backed up by public evidence. But she didn’t equivocate as she pointed the finger.

“The fact that security services surveilled elected members of Parliament should raise enormous concerns over the state of European democracy,” Kaili said. “This goes beyond the personal: We have to defend the European Parliament and the work of its members.”

Kaili was jailed in December as part of a deep corruption probe Belgian authorities were conducting into whether foreign countries were illegally influencing the European Parliament’s work. Her arrest came after the Belgian police recovered €150,000 in cash from her apartment — where she lived with her partner, Francesco Giorgi, who was also arrested — and a money-stuffed bag her father had.

The Greek politician flatly dismissed the charges across her interviews.

“No country has ever offered me money and I have never been bribed. Not even Russia, as has been alleged,” she told El Mundo. “My lawyers and I believe this was a police operation based on false evidence.”

According to her arrest warrant, Kaili was suspected of being “the primary organizer or co-organizer” of public corruption and money laundering.

“Eva Kaili told the journalist of ‘El Mundo’ not to publish her interview, until she gave them the final OK; unfortunately, the agreement was not honored,” her lawyer Michalis Dimitrakopoulos said on Monday.

Flying in on a Pegasus (committee)

The allegations — Kaili’s first major push to spin her arrest — prompted plenty of incredulity, including from those who worked with her on the Pegasus, or PEGA, committee. It especially befuddled those who recalled that Kaili had faced accusations of undermining the committee’s work. 

“I have absolutely no reason to believe the Belgian intelligence services spied on PEGA,” said Dutch MEP Sophie in ‘t Veld, who helped prepare the committee’s final report. “Everything we do is public anyway. And we have our phones checked regularly, it makes absolutely no sense.”

Kaili’s decision to invoke her PEGA Committee work is intriguing as it taps into a controversial period of her career. 

While the panel was deep into its work in 2022, Greece was weathering its own persistent espionage scandal, which erupted after the government acknowledged it had wiretapped the leader of Kaili’s own party, Pasok. 

Yet Kaili perplexed many when she started publicly arguing in response that surveillance was common and happens across Europe, echoing the talking points of the ruling conservative government instead of her own socialist party. She also encouraged the PEGA panel not to visit Greece as part of its investigation.

The arrest warrant for MEP Andrea Cozzolino also mentions that the alleged influence ringleader, former Parliament member Pier Antonio Panzeri, discussed getting Kaili onto the PEGA Committee to help advance Moroccan interests (Morocco has been accused of illegally using the spyware).

A war of words?

Kaili’s media tour raises questions about how the Qatargate probe will unfold in the coming months. 

Eventually, Kaili and the other suspects will likely face trial, where authorities will have a chance to present their evidence. But until then, the suspects will have a chance to shape and push their preferred narrative — depending on what limits the court places on their public statements.

In recent weeks, Kaili has moved from jail to house arrest to an increasingly unrestricted life, allowing her more chances to opine on the case. Her lawyers also claim she will soon be back at work at the Parliament, although she is banned from leaving Belgium for Parliament’s sessions in Strasbourg.

Pieter Haeck, Eddy Wax, Antoaneta Roussi and Barbara Moens contributed reporting.


Retrospective facial recognition tech conceals human rights abuses

By Ella Jakubowska, EDRi, Hajira Maryam and Matt Mahmoudi, Amnesty International

Following the burglary of a French logistics company in 2019, facial recognition technology (FRT) was used on security camera footage of the incident in an attempt to identify the perpetrators. 

FRT works by attempting to match images from, for example, closed-circuit television (CCTV) cameras to databases of often millions of facial images, in many cases collected without knowledge and consent. 
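Mechanically, that matching step is a similarity search over numerical face embeddings, with every database entry above a similarity threshold returned as a candidate. A minimal sketch of that idea (all vectors, names and the threshold here are hypothetical toy values; real systems extract high-dimensional embeddings with learned neural networks):

```python
import math

def cosine_similarity(a, b):
    """Cosine similarity between two embedding vectors."""
    dot = sum(x * y for x, y in zip(a, b))
    norm_a = math.sqrt(sum(x * x for x in a))
    norm_b = math.sqrt(sum(x * x for x in b))
    return dot / (norm_a * norm_b)

def match_candidates(probe, database, threshold=0.9):
    """Return every database entry whose similarity to the probe image's
    embedding exceeds the threshold -- a 'candidate list' of possible matches."""
    return [name for name, emb in database.items()
            if cosine_similarity(probe, emb) >= threshold]

# Toy database of face embeddings (3-d vectors for illustration;
# production systems typically use 128- to 512-dimensional embeddings).
db = {
    "person_a": [0.9, 0.1, 0.4],
    "person_b": [0.1, 0.8, 0.6],
    "person_c": [0.88, 0.12, 0.42],
}
probe = [0.9, 0.1, 0.41]  # embedding extracted from, say, a CCTV frame
print(match_candidates(probe, db))  # → ['person_a', 'person_c']
```

Everything above the threshold lands on the candidate list, which is how a single probe image can implicate hundreds of people at once; where the threshold is set directly trades false matches against misses.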

In this case, the FRT system listed two hundred people as potential suspects. 

From this list, the police singled out ‘Mr H’ and charged him with the theft, despite a lack of physical evidence to connect him to the crime.

At his trial, the court refused a request from Mr H’s lawyer to share information about how the system compiled the list, which was at the heart of the decision to charge Mr H. 

The judge decided to rely on this notoriously discriminatory technology, sentencing Mr H to 18 months in prison.

Indicted by facial recognition

“Live” FRT is often the target of (well-earned) criticism, as the technology is used to track and monitor individuals in real time. 

However, the use of facial recognition technology retrospectively, after an incident has taken place, is less scrutinised despite being used in cases like Mr H’s. 

Retrospective FRT is made easier and more pervasive by the wide availability of security camera footage and the infrastructures already in place for the technique.

Now, as part of negotiations for a new law to regulate artificial intelligence (AI), the AI Act, EU governments are proposing to allow the routine use of retrospective facial recognition against the public at large — by police, local governments and even private companies.

The EU’s proposed AI Act is based on the premise that retrospective FRT is less harmful than its “live” iteration.

The EU executive has argued that the risks and harms can be mitigated with the extra time that retrospective processing affords.

This argument is wrong. Not only does the extra time fail to tackle the key issues — the destruction of anonymity and the suppression of rights and freedoms — but it also introduces additional problems.

‘Post’ RBI: The most dangerous surveillance measure you’ve never heard of?

Remote Biometric Identification, or RBI, is an umbrella term for systems like FRT that scan and identify people using their faces — or other body parts — at a distance. 

When such systems are used retrospectively, the EU’s proposed AI Act refers to them as “Post RBI”. Post RBI means that software could be used to identify people in a feed from public spaces hours, weeks, or even months after it was captured. 

For example, running FRT on footage of protesters captured by CCTV cameras in public spaces. Or, as in the case of Mr H, running CCTV footage against a government database of a staggering 8 million facial images.

The use of these systems produces a chilling effect in society; on how comfortable we feel attending a protest, seeking healthcare — such as abortion in places where it is criminalised — or speaking with a journalist.

Just knowing that retrospective FRT may be in use could make us afraid of how information about our personal lives could be used against us in the future.

FRT can feed racism, too

Research suggests that the application of FRT disproportionately affects racialised communities. 

Amnesty International has demonstrated that individuals living in areas at greater risk of racist stop-and-search policing — overwhelmingly affecting people of colour — are likely to be exposed to more data harvesting and invasive facial recognition technology.

For example, Dwreck Ingram, a Black Lives Matter protest organiser from New York, was harassed by police forces at his apartment for four hours without a warrant or legitimate charge, simply because he had been identified by post RBI following his participation in a Black Lives Matter protest. 

Ingram ended up in a long legal battle to have false charges against him dropped after it became clear that the police had used this experimental technology on him.

The list goes on. Robert Williams, a resident of Detroit, was falsely arrested for theft committed by someone else. 

Randall Reid was sent to jail in Louisiana, a state he’d never visited, because police used FRT to wrongly identify him as a suspect in a robbery. 

For racialised communities, in particular, the normalisation of facial recognition is the normalisation of their perpetual virtual line-up.

If you have an online presence, you’re probably already in FRT databases

This dystopian technology has also been used by football clubs in the Netherlands to scan for banned fans and wrongly issue a fine to a supporter who did not attend the match in question. 

Reportedly, it has also been used by police in Austria against protesters and in France under the guise of making cities “safer” and more efficient, while in fact expanding mass surveillance.

These technologies are often offered at low or no cost. 

One company offering such services is Clearview AI. The company has offered highly invasive facial recognition searches to thousands of law enforcement officers and agencies across Europe, the US and other regions. 

In Europe, national data protection authorities have taken a strong stance against these practices, with Italian and Greek regulators fining Clearview AI millions of euros for scraping the faces of EU citizens without legal basis. 

Swedish regulators fined the national police for unlawfully processing personal data when using Clearview AI to identify individuals.

AI Act could be a chance to end abuse of mass surveillance

Despite these promising moves to protect our human rights from retrospective facial recognition by data protection authorities, EU governments are now seeking to implement these dangerous practices regardless.

Biometric identification experiments in countries across the globe have shown us over and over again that these technologies, and the mass data collection they entail, erode the rights of the most marginalised people, including racialised communities, refugees, migrants and asylum seekers.

European countries have begun to legalise a range of biometric mass surveillance practices, threatening to normalise the use of these intrusive systems across the EU. 

This is why, more than ever, we need strong EU regulation that captures all forms of live and retrospective biometric mass surveillance in our communities and at EU borders, including stopping Post RBI in its tracks.

With the AI Act, the EU has a unique opportunity to put an end to rampant abuse facilitated by mass surveillance technologies. 

It must set a high standard for human rights safeguards for the use of emerging technologies, especially when these technologies amplify existing inequalities in society.

Ella Jakubowska is a Senior Policy Advisor at European Digital Rights (EDRi), a network collective of non-profit organisations, experts, advocates and academics working to defend and advance digital rights across the continent.

Hajira Maryam is a Media Manager, and Matt Mahmoudi is an AI and Human Rights Researcher at Amnesty Tech, a global collective of advocates, campaigners, hackers, researchers & technologists defending human rights in a digital age.

At Euronews, we believe all views matter. Contact us at [email protected] to send pitches or submissions and be part of the conversation.


China turbocharging crackdown on Iranian women, say experts

Iran’s government has added yet another weapon to its arsenal of oppression.

On Saturday, authorities announced they were installing cameras in public places that can identify and punish women who do not wear a headscarf, as mandated by Iranian law.

Those detected not covering their hair will receive a “warning text message”, as reports suggest Iranian officials effectively want to replace the unpopular morality police that enforces the rules with surveillance.

But Iran is not acting alone.

Though Beijing has not publicly said so, Craig Singleton, Senior Fellow at the US-based Foundation for the Defense of Democracies, “highly suspects” the high-tech cameras came from China.

Cemented by a secretive 25-year cooperation agreement struck in 2021, Beijing has helped Iran’s beleaguered regime build an intricate surveillance state, prompting some commentators to warn Iranians face a “dystopian future”.

Facial recognition technology and powerful tools for video and crowd surveillance, phone and text monitoring have all been supplied by Chinese companies, while Iranian government officials have reportedly received training on matters such as “manipulating public opinion”.

‘Gender segregation’

While the impact of the growing “surveillance state” is ubiquitous, touching all of Iran’s roughly 88 million people, women are particularly targeted.

“Technology continues to restrict the movement of women in Iran and prevent them from enjoying basic freedoms, like going to the spaces they want to or dressing how they like,” said Melody Kazemi of Filterwatch, a group that monitors online censorship in Iran.

“It’s contributing to their treatment as second-class citizens, allowing women to continue to be arrested, intimidated or harassed.”

Starting in September, mass anti-government protests swept through Iran after the death of 22-year-old Mahsa Amini, who was arrested for allegedly violating the country’s strict dress code.

The women-led protests eventually subsided amid a torrent of violence and repression by the state, including mass arrests and executions.

Still, technology from Beijing has helped “prop up” the deeply unpopular Islamic government, Singleton told Euronews. 

While it could not “neutralise the root causes of unrest”, he said: “For now, such technologies appear necessary, albeit insufficient, for authoritarian regimes like Iran to completely eradicate all forms of dissent.”

Online tools played a central role in the regime’s crackdown on protests last year, with mobile and internet surveillance used to retrospectively identify and detain demonstrators.

China reportedly sold Tehran a powerful surveillance system capable of monitoring landline, mobile and internet communications, though a wide variety of equipment is likely to have been used in the crackdown. 

‘It’s been happening for a long time now’

Using technology for suppression has a long history in Iran. 

“It’s not a new thing,” said Kazemi. “We shouldn’t forget that Iran already has used older technologies and pre-existing methods to oppress women, dissidents and opposition.”

In 2019, Iranian police set up an automated system of cameras to warn women flouting the dress code in their cars. 

Hundreds received text messages summoning them to the so-called morality police. However, Kazemi says there were “all sorts of false positives”, with long-haired men getting told off for not wearing hijabs.

“No matter where these technologies come from, even in a democratic country with an independent judiciary, they go wrong and produce errors all the time,” she said. “They are designed in a way that automates human rights violations.”

Behind China-Iran cooperation is, of course, money.

Chinese surveillance firms, like Tiandy, Hikvision, and Dahua, are “keenly focused” on finding new markets outside of mainland China, which is already “saturated” with intrusive surveillance, claims Singleton.

Part of this is testing whether Chinese tech can be rolled out overseas.

“Iran has transformed itself into a Middle East incubator for Beijing’s techno-authoritarianism, in essence enabling Chinese firms to deploy their systems abroad to determine whether they are compatible with non-Chinese networks,” said Singleton.

He called such “interoperability” essential if these “Chinese firms want to market their surveillance products to other authoritarian regimes.”

But there are geopolitical motives, too.

“Beijing’s great-power ambitions hinge, in part, on… [its] technological supremacy,” said Singleton – something “the US and its allies have fallen short in countering.”

Much high-tech equipment has been developed amid China’s repression of Uyghur Muslims and other ethnic minorities, which the US has called a “genocide”. It has involved monitoring smartphone activity and gathering biometric data, including DNA, blood type, fingerprints, voice recordings and face scans, alongside mass detentions and sterilization.

‘There’s a lot we still don’t know’

But Beijing is not the only country driving Iran’s technological control, with some technologies being homegrown.

An investigation by The Intercept found that authorities had baked SIAM spyware into the country’s mobile networks to track users, decrypt messages and block internet access on smartphones.

Other technology comes from the West.

In December, the US blacklisted Tiandy Technologies, a Chinese video surveillance company that provided facial recognition technology to Iran’s Revolutionary Guard Corps, widely considered the true power brokers in Iran. 

The processors for its video recording systems were reportedly made by the US semiconductor giant Intel Corp, though Intel said it “ceased doing business with Tiandy following an internal review.” 

Still, researcher Kazemi said big questions hung over what technologies were being used and where they came from.

She warned that Iran’s regime could also be overinflating claims around the tech it had in a bid to intimidate people and deter future dissent.

“Just because the Iran government says they are using this type of technology, I wouldn’t rely on it one way or another,” she said. “It could just be rhetoric.”

“There is an appetite from the government to suggest that they are getting better and more efficient as more and more people are trying to resist.”

In any case, Kazemi said more research was needed into such technology and its uses around the world, with much currently shrouded in mystery. 

“We need to get more accurate information to give people better advice on how to resist them.”


Latest downed objects likely had ‘benign purpose’, says US

The three still-unidentified aerial objects shot down by the US in the past week likely had merely a “benign purpose,” the White House acknowledged Tuesday, drawing a distinction between them and the massive Chinese balloon that earlier traversed the US with a suspected goal of surveillance.

“The intelligence community is considering as a leading explanation that these could just be balloons tied to some commercial or benign purpose,” said White House national security spokesman John Kirby.

Officials also disclosed that a missile fired at one of the three objects, over Lake Huron on Sunday, missed its intended target and landed in the water before a second one successfully hit.

The new details came as the Biden administration’s actions over the past two weeks faced fresh scrutiny in Congress.

First, citing safety concerns, US fighter jets didn’t shoot down what officials described as a Chinese spy balloon until after it had crossed much of the United States. Then the military deployed F-22 fighters with heat-seeking missiles to quickly shoot down what likely were harmless objects.

Taken together, the actions raised political as well as security questions about whether the Biden administration overreacted after facing Republican criticism for reacting too slowly to the big balloon.

Even as more information about the three objects emerges, questions remain about what they were, who sent them and how the US might respond to unidentified airborne objects in the future.

Still unaddressed are questions about the original balloon, including what spying capabilities it had and whether it was transmitting signals as it flew over sensitive military sites in the United States. It was believed by American intelligence to have initially been on a track toward the US territory of Guam, according to a US official.

The US tracked it for several days after it left China, said the official, who spoke to The Associated Press on condition of anonymity to discuss sensitive intelligence. It appears to have been blown off its initial trajectory and ultimately flew over the continental US, the official said.

Balloons and other unidentified objects have been previously spotted over Guam, a strategic hub for the US Navy and Air Force in the western Pacific.

It’s unclear how much control China retained over the balloon once it veered from its original trajectory. A second US official said the balloon could have been externally maneuvered or directed to loiter over a specific target, but it’s unclear whether Chinese forces did so.

Even less is known about the three objects shot down over three successive days, from Friday to Sunday, in part because it’s been challenging to recover debris from remote locations in the Canadian Yukon, off northern Alaska and near the Upper Peninsula of Michigan on Lake Huron. So far, officials have no indication they were part of a bigger surveillance operation along with the balloon that was shot down off the South Carolina coast on Feb. 4.

“We don’t see anything that points right now to being part of the PRC spy balloon program,” Kirby told reporters, referring to the People’s Republic of China. It’s also not likely the objects were “intelligence collection against the United States of any kind — that’s the indication now.”

No country or private company has come forward to claim any of the objects, Kirby said. They do not appear to have been operated by the U.S. government.

Kirby had hinted Monday that the three objects were different in substantive ways from the balloon, including in their size. And his comments Tuesday marked a clear effort by the White House to draw a line between the balloon, which officials believe was part of a Chinese military program that has operated over five continents, and objects that the administration thinks could simply be part of some research or commercial effort.

In Washington, Pentagon officials met with senators for a classified briefing on the shootdowns. Lawmakers conveyed concerns from their constituents about a need to keep them informed and came away assured the objects were not extraterrestrial in nature but wanting many more details.

Still, Sen. Thom Tillis, R-N.C., said the successful recent interceptions were likely to have a “calming influence” and make future shootdowns less likely.

Sen. Lindsey Graham, R-S.C., told reporters after the briefing that he didn’t think the objects posed a threat.

“They’re trying to figure out — you know there’s a bunch of junk up there. So you got to figure out what’s the threat, what’s not. You see something, you shouldn’t always have to shoot it down,” Graham said.

Biden has ordered National Security Adviser Jake Sullivan to form an interagency team to study the detection, analysis and “disposition of unidentified aerial objects” that could pose either safety or security risks.

The recent objects have also drawn the attention of world leaders including in Canada, where one was shot down on Saturday, and in the United Kingdom, where the prime minister has ordered a security review. 

Japan’s Defense Ministry said Tuesday that at least three flying objects spotted in Japanese airspace since 2019 are strongly believed to have been Chinese spy balloons.

Meanwhile, U.S. officials confirmed that a first missile aimed at the object over Lake Huron landed instead in the water, but that a second one hit the target.

Gen. Mark Milley, chairman of the Joint Chiefs of Staff, said the military went to “great lengths” to make sure none of the strikes put civilians at risk, including identifying what the debris field size was likely to be and the maximum effective range of the missiles used.

“We’re very, very careful to make sure that those shots are in fact safe,” Milley said. “And that’s the guidance from the president. Shoot it down, but make sure we minimize collateral damage and we preserve the safety of the American people.”

The object taken down Sunday was the third in as many days to be shot from the skies. The White House has said the objects differed in size and maneuverability from the Chinese surveillance balloon that US fighter jets shot down earlier this month, but that their altitude was low enough to pose a risk to civilian air traffic.

Weather challenges and the remote locations where the three objects were shot down over Alaska, Canada and Lake Huron have impeded recovery efforts so far.

Milley was in Brussels with Defense Secretary Lloyd Austin to meet with members of the Ukraine Defense Contact Group on additional weapons and defense needs for Kyiv in advance of Russia’s anticipated spring offensive.



US military shoots down fourth flying object after Great Lakes airspace closure

A US fighter jet shot down an “unidentified object” over Lake Huron on Sunday on orders from President Joe Biden. It was the fourth such downing in eight days and the latest military strike in an extraordinary chain of events over US airspace that Pentagon officials believe has no peacetime precedent.

Part of the reason for the repeated shootdowns is a “heightened alert” following a spy balloon from China that emerged over US airspace in late January, Gen. Glen VanHerck, head of NORAD and US Northern Command, said in a briefing with reporters.

Since then, fighter jets last week also shot down objects over Canada and Alaska. Pentagon officials said they posed no security threats, but so little was known about them that officials were ruling nothing out — not even UFOs.

“We have been more closely scrutinising our airspace at these altitudes, including enhancing our radar, which may at least partly explain the increase,” said Melissa Dalton, assistant defence secretary for homeland defence.


US authorities have made clear that they constantly monitor for unknown radar blips, and it is not unusual to shut down airspace as a precaution to evaluate them.

But the unusually assertive response was raising questions about whether such use of force was warranted, particularly as administration officials said the objects posed no great national security concern and the downings were carried out purely as a precaution.

VanHerck said the US adjusted its radar so it could track slower objects. “With some adjustments, we’ve been able to get a better categorization of radar tracks now,” he said, “and that’s why I think you’re seeing these, plus there’s a heightened alert to look for this information.”

He added: “I believe this is the first time within United States or American airspace that NORAD or United States Northern Command has taken kinetic action against an airborne object.”

Asked if officials have ruled out extraterrestrials, VanHerck said, “I haven’t ruled out anything at this point.”

The Pentagon officials said they were still trying to determine what exactly the objects were, and said they had considered using the jets’ guns instead of missiles, but that proved too difficult. They drew a strong distinction between the three objects shot down over the weekend and the balloon from China.

Minnesota Gov. Tim Walz tweeted that airmen in the 148th Fighter Wing, an Air National Guard fighter unit in Duluth, shot down the object over Lake Huron.

The extraordinary air defense activity began in late January, when a white orb officials said was from China appeared over the US and hovered above the nation for days before fighter jets downed it off the coast of Myrtle Beach, South Carolina.

That event played out over livestream. Many Americans have been captivated by the drama playing out in the skies as fighter jets scramble to shoot down objects.

The latest object brought down was first detected on Saturday evening over Montana but was initially thought to be an anomaly. Radar picked it up again on Sunday hovering over the Upper Peninsula of Michigan and moving over Lake Huron, Pentagon officials said Sunday.

US and Canadian authorities had restricted some airspace over the lake earlier Sunday as planes were scrambled to intercept and try to identify the object. According to a senior administration official, the object was octagonal, with strings hanging off, but had no discernible payload.

It was flying low at about 20,000 feet, said the official who spoke to The Associated Press on condition of anonymity to discuss sensitive matters.

Meanwhile, US officials were still trying to precisely identify two other objects shot down by F-22 fighter jets, and were working to determine whether China was responsible as concerns escalated about what Washington said was Beijing’s large-scale aerial surveillance program.

An object shot down Saturday over Canada’s Yukon was described by US officials as a balloon significantly smaller than the balloon — the size of three school buses — hit by a missile on Feb. 4. A flying object brought down over the remote northern coast of Alaska on Friday was more cylindrical and described as a type of airship.

Both were believed to have a payload, either attached or suspended from them, according to the officials who spoke to The Associated Press on condition of anonymity to discuss the ongoing investigation. Officials were not able to say who launched the objects and were seeking to figure out their origin.

The three objects were much smaller in size, different in appearance and flew at lower altitudes than the suspected spy balloon that fell into the Atlantic Ocean after the US missile strike.

The officials said the other three objects were not consistent with the fleet of Chinese aerial surveillance balloons that targeted more than 40 countries, stretching back at least into the Trump administration.

Senate Majority Leader Chuck Schumer told ABC’s “This Week” that US officials were working quickly to recover debris. Using shorthand to describe the objects as balloons, he said US military and intelligence officials were “focused like a laser” on gathering and accumulating the information, then compiling a comprehensive analysis.

“The bottom line is until a few months ago we didn’t know about these balloons,” Schumer, D-NY, said of the spy program that the administration has linked to the People’s Liberation Army, China’s military. “It is wild that we didn’t know.”

Eight days ago, F-22 jets downed the large white balloon that had wafted over the US for days at an altitude of about 60,000 feet. US officials immediately blamed China, saying the balloon was equipped to detect and collect intelligence signals and could maneuver itself. White House officials said improved surveillance capabilities helped detect it.

China’s Foreign Ministry said the unmanned balloon was a civilian meteorological airship that had blown off course. Beijing said the US had “overreacted” by shooting it down.

Then, on Friday, North American Aerospace Defense Command, the combined US-Canada organization that provides shared defense of airspace over the two nations, detected and shot down an object near sparsely populated Deadhorse, Alaska.

Later that evening, NORAD detected a second object, flying at a high altitude over Alaska, US officials said. It crossed into Canadian airspace on Saturday and was over the Yukon, a remote territory, when it was ordered shot down by Prime Minister Justin Trudeau.

In both of those incidents, the objects were flying at roughly 40,000 feet. The object on Sunday was flying at 20,000 feet.

The cases have increased diplomatic tensions between the United States and China, raised questions about the extent of Beijing’s American surveillance, and prompted days of criticism from Republican lawmakers about the administration’s response.
