AI ACT: EU regulation on artificial intelligence

Summary of the main steps taken by the European Union in regulating the use of artificial intelligence up to recent developments.

In today's society, artificial intelligence (AI) plays a fundamental role in the ongoing digital transformation. In the wake of COVID-19, AI has become a key tool for relaunching the economy; the EU has therefore started to draft rules for its use, in order to make the most of the opportunities AI offers while managing the risks arising from its use.

The main stages

On 20 October 2020, the Parliament adopted three proposals setting out how the EU can effectively regulate AI to boost innovation, ethical standards and trust in the technology.

According to the Parliament, the rules must be human-centred. They address issues of security, transparency and accountability, how to prevent discrimination and how to ensure respect for fundamental rights.

MEP Axel Voss said that the aim of the AI liability regime is to build trust by protecting citizens and to encourage innovation by giving businesses legal certainty.

One problem that still needs to be solved is intellectual property: it must be established who owns something developed entirely by AI.

In early 2021, the Parliament proposed guidelines on the use of artificial intelligence in specific sectors.

The first guidelines concerned the military and civilian spheres, and made it clear that AI should never replace humans or relieve them of their responsibility in these areas. MEPs therefore stressed the need for human oversight of AI systems and reiterated Parliament's call for a ban on AI-enabled lethal autonomous weapons.

In May 2021, the report on the use of AI in education, culture and the audiovisual sector was adopted, stressing that AI technologies must be designed in such a way as to avoid any gender, social or cultural bias while protecting diversity.

Finally, in the autumn of the same year, MEPs called for safeguards for cases where artificial intelligence tools are used by law enforcement.

The European Parliament also set up a special committee on Artificial Intelligence in a Digital Age (AIDA) to analyse the impact of AI on the EU economy. The final report of the AIDA committee, adopted by the plenary in May 2022, included a proposal for the EU's next steps on AI.

On 14 June 2023, the European Parliament set out its position on the AI Act. The priority is to ensure that AI systems in use in the EU are safe, transparent, traceable and non-discriminatory.

The AI Act

A trilogue meeting is scheduled for 6 December that could give the green light to EU legislation on artificial intelligence.

The AI Act is composed of 85 articles, which outline the different artificial intelligence systems and their limits, prohibit certain applications and introduce safeguard procedures to protect European citizens from abuses and violations of fundamental rights.

However, the negotiations currently under way between the Commission, the Parliament and the Council are running into difficulties on a few points at the very final stages.

The first point concerns which rules should apply to foundation models, that is, those general-purpose artificial intelligence systems capable of performing different tasks (such as generating a text or an image) and trained on huge amounts of uncategorised data, such as the widely used ChatGPT.

The second point of discussion concerns the use of AI for policing and surveillance tasks.

As for foundation models, the proposed solution is a two-tier system of obligations for the developers of these systems. On one side are high-impact AI models, which must comply with the rules on cybersecurity, transparency of training processes and sharing of technical documentation before reaching the market. On the other are the remaining foundation models, for which the provisions of the European law on artificial intelligence apply only when developers place their products on the market.

At the end of October, the Parliament and the Council seemed to agree on how to identify high-impact AI; however, the latter backed down under pressure from big tech companies. At the end of November, France, Germany and Italy asked to exclude foundation models from the AI Act.

Finally, the member states gave the Presidency of the Council a mandate to negotiate.

On the second point, the Parliament is firmly opposed to the use of AI for surveillance, unlike the member states, which do not want to give up the possibility of using artificial intelligence to analyse large amounts of data, identify people and perform real-time biometric recognition. Nor do they want to miss the opportunity to use AI to predict the probability that a crime will be committed, by whom and where.

The tension is therefore between the Parliament, which is determined to introduce stringent rules, and the Council, which wants a more accommodating approach, especially on the issue of policing.

If the trilogue does not reach a solution, the negotiations will break down and resume in January under the incoming Belgian presidency.