As the negotiation phase of Europe’s AI Act begins, both questions and praise pour in

On June 14, the European Parliament adopted its negotiating position on the draft EU Artificial Intelligence (AI) Act, which takes a risk-based approach to regulating AI, ranging from minimal and limited risk to high and unacceptable risk. With this, the Act has entered a phase of negotiations between the three institutions of the European Union (the European Parliament, the European Commission and the Council of the EU), a process known as trilogue negotiations, before it becomes law.

AI systems posing minimal or limited risks to people's health and safety, fundamental rights, or the environment will be allowed, subject to some transparency requirements, according to the Act. AI systems posing high risk will face stricter obligations, while those posing unacceptable risks will be banned. The use of AI for biometric surveillance, emotion recognition and predictive policing, for instance, is banned under the current draft. There is also an obligation for generative AI systems to disclose AI-generated content.


The plan is to have the AI Act ready by the end of 2023 to be put to vote before the European Parliament elections in June 2024.

Mixed reception

The draft legislation has triggered a mixed response from industry stakeholders as well as experts, with some calling it a move to make AI trustworthy, while others see it as a roadblock to innovation.

“The EU AI Act is a bold legislation rooted in fundamental rights that provides citizen-centric protection,” Gaurav Sharma, AI adviser for the German Agency for International Cooperation (GIZ) India, said, speaking in his personal capacity.

Berlin-based Udbhav Tiwari, Mozilla’s Head of Global Product Policy, concurred. “At Mozilla, we think the EU AI Act is something necessary to make AI more trustworthy, not just in the EU but globally. AI is too risky a technology to not have guardrails,” Mr. Tiwari said.

As has been the case with many AI-related developments since the launch of the generative AI tool ChatGPT in November 2022, the AI Act has also prompted its fair share of open letters. Executives of more than 150 European companies have signed an open letter calling on the EU Commission to reconsider some aspects of the AI Act. “In our assessment, the draft legislation would jeopardise Europe’s competitiveness and technological sovereignty without effectively tackling the challenges we are and will be facing,” the letter notes.

Among start-ups in Germany headed by founders of Indian origin, the consensus is not as clear-cut. According to the Migrant Founders Monitor 2023, 21% of start-up founders in Germany have a migration background.

Appu Shaji is one of them.

Mr. Shaji founded Mobius Labs in Berlin in 2018. The startup offers AI-powered computer vision technology that is easy to train and customise, delivered in the form of a software development kit. Mr. Shaji liked the objectives of the AI Act when it was first proposed in 2021, especially its risk-based approach. But since the last set of amendments, he has been concerned.

“There seems to be a lot of political thinking that’s clouding the judgement around AI. Political leaders want to show that they are against large U.S. companies such as OpenAI to prove a point,” Mr. Shaji said.

He recalled how in 2018, when the General Data Protection Regulation (GDPR) was launched, big technology companies such as Facebook (now Meta) and Google were portrayed as villains. “These big companies pay the fine and get away, but such regulations definitely kill homegrown competition,” Mr. Shaji added.

The other aspect that worries Mr. Shaji is the impact on AI research using open-source models and datasets. LAION, a German non-profit that builds open-source AI models and datasets, along with research institutions and developers, has also drafted an open letter to the European Parliament. One of their key recommendations was to ensure that open-source R&D can comply with the AI Act.

“The Act should promote open-source R&D and recognise the distinctions between closed-source AI models offered as a service, and AI models released as open-source code. Where appropriate, the Act should exempt open-source models from regulations intended for closed-source models,” the letter stated.

According to Mr. Shaji, a one-size-fits-all approach could give unfair advantage to big tech companies that already have deep pockets to build foundational AI models.

Roadblock to innovation?

“Current regulation plans need to change immediately in order for us to not lose yet another tech-race against the U.S. and China. They are too restrictive, too risk-averse and not agile enough,” reads a LinkedIn post from Rasmus Rothe, founder of AI investment firm Merantix and a board member of the German AI Association.

Mozilla’s Mr. Tiwari noted that statements claiming the AI Act will hamper innovation look at it purely from a business perspective rather than considering how it affects regular people. “A lot of the open letters signed by big tech companies point to future risks. While that is a good thing, one also has to address problems that are taking place currently because of some AI implementations,” Mr. Tiwari said.

While Mr. Shaji believes that the AI Act could be a role model for good regulation, attention also needs to be paid to its social impact. “If an AI model is used to surveil people without any accountability, the problem isn’t just the AI technique, but the surveillance without accountability which is a larger social aspect that needs regulation,” Mr. Shaji said.

Implications for Indian entrepreneurs

The AI Act will not just impact AI companies founded in Europe, but also non-European companies and startups deploying AI systems in the EU bloc. Each risk level has specific requirements that need to be complied with. But what happens when a startup is entering the EU market from India, where there is no data protection law or AI regulation in place?

“Data protection is the bedrock of good tech regulation. AI regulation comes on top of that. If local Indian entrepreneurs want to become global players, it’s imperative that they follow global best practices [such as the AI Act] and not wait for laws to be passed in India,” Mr. Tiwari said.

Mr. Sharma of GIZ noted that while India offers immense leeway for innovation, the government employs hard-stick technology policies. “Indian start-ups would find it easy to innovate in India. But due to lack of AI specific regulations, a lot has to be thought through. If there are unintended consequences due to AI algorithm implementation, then the government won’t hesitate to take action, such as banning the use of AI systems or filing a lawsuit, as the Indian AI strategy is rooted in ‘AI for All’,” Mr. Sharma said.

Mr. Sharma believes it is easier for Indian start-ups to enter the EU, which has written AI mandates, but it can be difficult if they are not sure how data protection, privacy or fundamental data governance works. The Digital Personal Data Protection Bill, 2023 is expected to be tabled in the monsoon session of the Indian Parliament, which begins on July 20.

Compliance complexities

When the General Data Protection Regulation came into effect in Europe in May 2018, a wave of compliance lawyers emerged to help organisations fulfil GDPR obligations. With the AI Act’s risk-based approach, where the risk level is determined by AI algorithms and their outputs, compliance is a tricky area, experts note.

Article 49 of the draft AI Act states that high-risk AI systems, products and services would need to pass strict safety and compliance requirements. Only then would they receive CE certification, allowing them to operate in the European Economic Area. Non-compliance with the AI Act could invite fines of up to €40 million or 7% of a company’s annual global turnover.


According to Mr. Sharma, compliance would be a basic bottleneck as even the institutional capacities aren’t fully developed. “The AI regulatory officer has to be an expert in multiple domains, apart from those of technology and law. That’s a big ask. There cannot be a compliance mapping in the same way as the GDPR compliance, as AI problems are more complex,” Mr. Sharma said.

The EU has already set up the European Centre for Algorithmic Transparency in Spain, tasked with providing scientific and technical expertise to ensure start-ups and organisations follow regulatory obligations. Article 53 of the AI Act also mentions the setting up of AI regulatory sandboxes in EU member countries, to test AI systems before deployment.

Mobius Labs’ Mr. Shaji said he would wait until the Act became law before commenting on compliance matters. “Our startup would fall under the low-risk category as we handle data primarily for the media and entertainment sector. We have built tools that we license out to our clients,” Mr. Shaji said.

In addition to AI regulations, there are existing sectoral regulations (for instance, healthcare regulations) that need to be complied with.

Mr. Tiwari noted that compliance would have to be a combination of internal and external oversight. “Compliance is going to be a function of capacity. It will be different for large and small organisations (cost-wise). Regardless, the solution cannot be a weakening of the regulations. When new regulations are passed, entire ecosystems are built around compliance. Over time, the number of people in the AI-related regulatory disciplines will also go up,” Mr. Tiwari said, pointing to how the number of privacy professionals went up post-GDPR.

The German AI Association, a collective of 400 AI companies, noted in its policy paper that the AI Act must be accompanied by an investment plan that includes funding and resources for start-ups and SMEs (small and medium enterprises) to comply with the Act. “This should be done not only through direct investment, but also through indirect investment, e.g. through funds that are more scalable, with clear requirements that the funding can only be used for European AI companies. Such funding would help mitigate the potential damage to European AI innovation caused by this regulation,” the policy paper notes.

Nimish Sawant is an independent journalist based in Berlin.
