EU AI Act Basics
What is the EU AI Act? Who does it apply to? What is expected of companies? What are the penalties for non-compliance?
EXPLAINER
3/11/2024 · 2 min read
The EU Artificial Intelligence Act (AI Act) is the world's first comprehensive legal framework specifically designed to regulate Artificial Intelligence (AI). Its primary goal is to ensure the trustworthy development, deployment, and use of AI throughout the European Union.
Who Does the EU AI Act Apply To?
The Act applies to a broad range of actors involved in the AI ecosystem, including:
AI developers: This encompasses companies and individuals creating AI systems.
AI providers: These are entities that make AI systems available to others, such as through sales or licensing.
AI deployers: This includes any organisation or person putting AI systems into operation, even if they didn't develop them.
Users of High-Risk AI: Anyone interacting with high-risk AI systems (defined below) also has specific obligations under the Act.
What Should Companies Do?
The specific actions companies need to take depend on the risk level of their AI system as outlined by the Act:
High-Risk AI: For systems deemed high-risk (e.g., AI for facial recognition in law enforcement), the Act lays out strict requirements. Companies must conduct risk assessments, implement mitigation strategies, maintain high data quality, ensure human oversight, and be transparent about the system's functioning.
Prohibited AI: Certain AI applications are entirely banned under the Act, such as social scoring used by governments to control citizens (as seen in China).
Low-Risk AI: For lower-risk AI (e.g., spam filters), companies should still demonstrate transparency and fairness in the system's design and use.
What is considered a High-Risk system?
The Act classifies AI systems into risk categories based on their potential impact on people's fundamental rights and safety. The full list of systems considered to be of high-risk is included in Annex III. Here are some examples:
Biometric identification systems: This includes facial recognition and other technologies used to identify individuals based on their physical characteristics (e.g., in law enforcement or border control).
AI systems for critical infrastructure: This covers AI used to manage essential services like transportation networks or energy grids.
AI for employment purposes: This includes AI used in recruitment, performance evaluation, or granting promotions.
Additional Considerations:
The Act emphasises the importance of AI literacy for all actors involved. Companies should ensure their staff and anyone involved with the AI system possess a sufficient understanding of its capabilities and limitations.
The EU AI Act represents a significant step towards responsible AI development and deployment. By adhering to its regulations, companies can ensure their AI systems are safe, fair, and trustworthy.
Fines:
The EU AI Act imposes fines for non-compliance. The severity depends on the type of infraction.
Worst offenses (like using banned AI) can incur fines of up to €35 million or 7% of a company's global annual turnover, whichever is higher.
Less serious offenses (like failing to meet transparency requirements) can result in fines of up to €15 million or 3% of annual turnover.
Providing false information attracts fines up to €7.5 million or 1% of annual turnover.
Fines are reduced for small and medium-sized enterprises, including start-ups: for them, each fine is capped at the lower of the two amounts rather than the higher. In deciding the penalty amount, authorities consider factors like the nature of the offence, its impact, and the company's size.
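The "fixed amount or percentage of turnover, whichever is higher" rule above can be sketched as a small calculation. This is an illustrative sketch only, using the caps stated in the Act; the function name and tier labels are my own, and the actual fine in any case is set by the authorities and may be far lower than the cap.

```python
def fine_cap_eur(turnover_eur: float, tier: str) -> float:
    """Illustrative upper bound on an EU AI Act fine.

    tier: 'prohibited'     - using banned AI practices
          'other'          - most other infringements (e.g., transparency)
          'false_info'     - supplying false information to authorities
    Figures are the caps stated in the Act, not the fine itself.
    """
    caps = {
        "prohibited": (35_000_000, 0.07),  # EUR 35M or 7% of turnover
        "other":      (15_000_000, 0.03),  # EUR 15M or 3% of turnover
        "false_info": (7_500_000,  0.01),  # EUR 7.5M or 1% of turnover
    }
    fixed, pct = caps[tier]
    # For most companies, the applicable cap is whichever is HIGHER.
    return max(fixed, pct * turnover_eur)

# A company with EUR 1 billion global annual turnover using a banned
# AI practice: max(EUR 35M, 7% of EUR 1B) = EUR 70M cap.
print(fine_cap_eur(1_000_000_000, "prohibited"))  # 70000000.0
```

Note that for SMEs and start-ups the comparison flips to the lower of the two amounts, so a similar sketch for them would use `min` instead of `max`.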
