There has been much discussion recently about artificial intelligence (“AI”). The use of AI systems (such as the well-known ChatGPT) has rapidly become widespread, and the possibilities they offer seem endless. However, significant risks (for example, ethical and privacy risks) also attach to the use of such systems.
The European Union has therefore decided to create a legislative framework for the use of “AI systems”. On 1 August 2024, the “Artificial Intelligence Act” (“AI Act” for short) officially entered into force. The AI Act is a “regulation”, that is, European legislation that is directly applicable throughout the European Union.
What exactly are AI systems?
The AI Act uses a broad definition of AI systems. It covers “machine-based” systems that operate with a degree of autonomy and that can learn from data to adapt to new information. This includes a range of technologies, from simple algorithms to highly complex machine learning models.
The AI Act divides AI systems into several categories, taking into account the level of risk:
- Systems with an “unacceptable risk”: these are systems which, for example, evaluate people based on social behaviour or emotions (so-called “social scoring”). European legislators consider that such systems are a flagrant violation of the fundamental rights of EU citizens and have therefore decided that such systems will be banned.
- “High risk” systems: this category includes AI systems used in certain critical sectors (critical infrastructure, education, human resources, the financial sector, etc.). Such systems are permitted but are subject to strict rules.
- “General Purpose AI” (or “GPAI”) systems: these are AI systems with applications across many different sectors. Because of their general applicability and scale, this type of AI system is also regulated.
- Finally, there are the AI systems with “limited risk”: this category includes the now-familiar chatbots and personal assistants. For such systems to be used, it must be clear to users that they are conversing with a machine.
This classification allows the rules for each type of AI system to be defined based on risk.
Scope
The AI Act applies to:
- Providers of AI systems within the EU (regardless of whether the provider itself is based in the EU or is outside it).
- Users of AI systems within the EU.
As with the GDPR, European legislators are now also targeting companies that are based outside of the EU but want to put their AI systems on the EU market.
Entry into force
The provisions of the AI Act will apply incrementally. Some important dates are:
- From 2 February 2025, AI systems that present an unacceptable risk will be banned within the European Union.
- From 2 August 2025 (i.e. 12 months after the AI Act entered into force), the rules on the use of GPAI systems will also apply.
- As of 2 August 2027 (36 months after the AI Act entered into force), the requirements concerning “high risk” AI systems will likewise apply in full.
The AI Act is intended to evolve along with the technology; updates and amendments will therefore also be necessary.
Enforceability
With the AI Act, the European Union wants to take a leading role in regulating new technologies, as it did previously for privacy (with the well-known “GDPR”).
The intention is therefore to make the legislation enforceable against companies that do not comply with the new rules, just as with the GDPR. Each member state will have an “AI regulator” to oversee the rules (with the power to issue fines), and for the big “GPAI systems” an EU-level regulator will ensure appropriate monitoring.
Depending on the type of infringement, fines may be issued up to a fixed amount or a percentage of annual global turnover. For the most serious category of breaches, fines may go up to EUR 35,000,000 or 7% of annual global turnover, whichever is higher.
Experience with the GDPR shows that such fines are actually issued and that the EU is serious about protecting its citizens.
So what exactly does this legislation mean for entrepreneurs?
Entrepreneurs will have to identify which AI systems are being used within their company, whether knowingly or unknowingly, and determine into which category those systems fall under the AI Act.
Based on this analysis, it will then be possible to determine which (internal) rules need to be introduced, for example on how employees should deal with AI systems, and to provide the necessary training.
Concretely, by 2 February 2025, companies will have to take measures to ensure that employees who deal with AI systems have an adequate level of AI knowledge.
Obviously, our experts at PKF BOFIDI Legal can offer support and guidance on this. Feel free to contact our team at info@pkfbofidilegal.com.