
EU Artificial Intelligence Act: Implications From the First AI Act

Conrad Rebello
  • Pre-regulation AI bloomed with promise, but bias concerns shadowed its rapid ascent.

  • The 2024 EU Act sets global AI governance standards with risk-based assessment.

  • EU Act tiers AI from banned to minimal risk, requiring compliance over time.

  • Non-compliance with the AI Act carries fines steep enough to cripple firms; proactive compliance is key to avoiding a financial blow.

  • Sandboxes allow AI-based startups to test, refine, and gather data in a safe environment.





Before the EU AI Act:


Artificial intelligence (AI) wasn't always the ubiquitous technology it is today. It started with unassuming applications, like strategic game-playing programs and expert systems designed to mimic human expertise in specific fields. These early successes, while limited in scope, paved the way for the remarkable AI revolution we're witnessing today. AI is making significant strides in industries we could scarcely have predicted, and its influence is rapidly expanding. This widespread adoption is fuelled in part by those who readily embrace AI's potential, leaving skeptics scrambling to catch up. The rise of generative AI, though still maturing, has captured the public imagination with a glimpse of AI's capabilities.




However, this enthusiasm isn't without its concerns. The increasing popularity of AI has also exposed its vulnerabilities, raising questions about potential biases and risks to fundamental rights. Not all AI systems are created equal, and even the most advanced can make errors or perpetuate biases within their decision-making processes. These flaws can have real-world consequences, impacting various aspects of our lives. As the use of AI continues to grow, so does the need to address these issues.



From Spark to Law: The EU's 2021 AI Regulation Takes Shape



This is where the European Union (EU) steps in with its ground-breaking AI Act. In April 2021, the European Commission proposed a comprehensive regulatory framework for AI. The European Parliament gave the green light, and the EU Council unanimously approved the act in May 2024. This pioneering legislation is the first of its kind, aiming to protect citizens across EU member states and associated countries. The act sets a global standard for AI governance and regulation, paving the way for a future where trustworthy AI practices can flourish.

To achieve this, the Act implements a risk-based approach: AI systems are assessed as potentially high-risk or prohibited based on factors such as intended use, potential harm, and the data processing involved. The challenge lies in the fact that specific guidelines for this classification are not yet public and will only be released 18 months after the act enters into force. The regulatory framework focuses primarily on risk assessment: AI systems, regardless of their application, will be evaluated based on the potential dangers they pose to users. This gives developers a clear framework, allowing them to focus their resources, and it reduces unnecessary burdens on those working on less risky applications. A tiered system reserves the most stringent measures for the applications with the greatest potential for harm.



The Act's Tiered Approach for AI Models


Companies will have between 6 months and 2 years to comply with the new AI legislation, depending on the risk level of the AI systems they use. Before we dive in, let us identify the four risk categories:


The Act's risk pyramid: unacceptable risk at the top, then high risk and limited risk, with minimal risk at the bottom.


Unacceptable Risk:


Certain applications deemed too dangerous are banned outright. These include AI systems that use subliminal manipulation, discriminatory systems such as social scoring, and predictive policing. A prime example is government-run social scoring, where algorithms judge and rank citizens based on factors including financial history, online behaviour, and social connections. Citizens who are constantly monitored and scored might avoid expressing dissent for fear of lower rankings, stifling free speech and nudging society in a dystopian direction.


High Risk:


High-risk AI systems will be the most heavily regulated systems allowed on the EU market. Stringent regulations apply to high-risk applications, such as those used in critical infrastructure, recruitment, healthcare, and the administration of justice. These systems can cause significant harm if they fail or are misused; real-time facial recognition is one example. The European AI Act also imposes stricter requirements on AI that is baked into already-regulated products, such as AI features in medical devices, elevators, vehicles, or machinery. Since these products already go through third-party safety checks, the AI system within them needs extra scrutiny to ensure it doesn't compromise safety. Additionally, clear mechanisms for human oversight are crucial to ensure accountability and prevent unintended consequences.


Limited Risk:


Limited-risk AI, though not posing a direct threat, can still mislead users. To address this, the EU AI Act mandates transparency for such applications: users must be informed that they are interacting with AI, especially for applications with the potential to mislead or for systems that generate content that could be mistaken for real (such as deepfakes). The Act recognizes that certain limited-risk AI can have unintended consequences, particularly powerful models like GPT-4; to mitigate these risks, rigorous testing and incident reporting are mandated for such models. For instance, a seemingly harmless chatbot conversation could perpetuate stereotypes present in the data the model was trained on.
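As a rough illustration of what this transparency duty might look like in practice, here is a minimal sketch of a chatbot wrapper that discloses the AI interaction up front and labels generated output. The function names and disclosure wording are hypothetical, not taken from the Act.

```python
# Hypothetical sketch of a limited-risk transparency flow; names and
# wording are illustrative, not prescribed by the AI Act.

AI_DISCLOSURE = (
    "You are chatting with an AI assistant. "
    "Responses are machine-generated and may contain errors."
)

def label_generated_content(text: str) -> str:
    """Attach a visible label so AI-generated output (text, or by analogy
    deepfake-style media) cannot be mistaken for human-made content."""
    return f"[AI-generated] {text}"

def start_chat_session() -> None:
    # Surface the disclosure before the first exchange,
    # rather than burying it in a policy page.
    print(AI_DISCLOSURE)

start_chat_session()
print(label_generated_content("Here is a summary of your document..."))
```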


Minimal Risk:


Minimal-risk AI systems face no restrictions or mandatory obligations. These include applications such as AI in video games or spam filters. The vast majority of AI systems currently used in the EU fall into this category, though responsible AI development practices are still encouraged.


Existing AI systems get a grace period to comply with the Act's regulations after its entry into force. High-risk systems already placed on the market will have an extra two to four years, depending on their type, and older high-risk systems don't need to comply until they're significantly updated. This gives businesses time to adjust, but it also creates an incentive to avoid major updates prior to the deadline.
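To make the tiers concrete, here is a minimal sketch mapping this article's example systems to risk categories and rough compliance windows. The mapping and the month figures are assumptions drawn from the "6 months to 2 years" range described above, not an official classification.

```python
from enum import Enum

class RiskTier(Enum):
    UNACCEPTABLE = "banned outright"
    HIGH = "strict conformity requirements"
    LIMITED = "transparency obligations"
    MINIMAL = "no mandatory obligations"

# Illustrative mapping of the article's examples to tiers; the Act's own
# classification guidance had not been published at the time of writing.
EXAMPLE_SYSTEMS = {
    "government social scoring": RiskTier.UNACCEPTABLE,
    "real-time facial recognition": RiskTier.HIGH,
    "recruitment screening tool": RiskTier.HIGH,
    "customer-service chatbot": RiskTier.LIMITED,
    "email spam filter": RiskTier.MINIMAL,
}

# Rough compliance windows implied by the article's timeline; exact
# deadlines depend on the system and are assumptions here.
COMPLIANCE_WINDOW_MONTHS = {
    RiskTier.UNACCEPTABLE: 6,   # bans bite first
    RiskTier.HIGH: 24,
    RiskTier.LIMITED: 24,
    RiskTier.MINIMAL: None,     # no mandatory deadline
}

for system, tier in EXAMPLE_SYSTEMS.items():
    window = COMPLIANCE_WINDOW_MONTHS[tier]
    deadline = f"~{window} months" if window else "none"
    print(f"{system}: {tier.name} ({tier.value}); compliance window: {deadline}")
```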



Enforcement and Penalties



Companies should be aware of the significant financial penalties the EU AI Act attaches to non-compliance. Breaches involving banned AI systems can cost companies up to 7% of their global revenue or €35 million, whichever is higher. For violations of high-risk AI transparency obligations or general-purpose AI (GPAI) model requirements, the fines are lower, reaching a maximum of 3% of global revenue or €15 million. Providing false information to authorities attracts fines of up to 1% of global revenue or €7.5 million. Understanding these hefty fines and taking proactive steps to meet the Act's requirements is crucial to avoiding financial repercussions.
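Since each penalty tier boils down to "the higher of a fixed amount and a share of global revenue", the caps can be expressed in a few lines of code. This sketch uses the figures quoted above; the category names are our own shorthand, not terms from the Act.

```python
# Sketch of the penalty caps quoted above: each breach category is capped
# at the HIGHER of a fixed amount and a share of global annual revenue.

PENALTY_CAPS = {
    "prohibited_ai_practice": (0.07, 35_000_000),           # 7% or €35M
    "high_risk_or_gpai_obligations": (0.03, 15_000_000),    # 3% or €15M
    "false_information_to_authorities": (0.01, 7_500_000),  # 1% or €7.5M
}

def max_fine_eur(breach: str, global_revenue_eur: float) -> float:
    """Return the maximum possible fine: whichever is higher,
    the revenue share or the fixed cap."""
    pct, fixed = PENALTY_CAPS[breach]
    return max(pct * global_revenue_eur, fixed)

# A firm with €2 billion in global revenue deploying a banned system faces
# up to €140 million (7% of revenue), since that exceeds the €35M floor.
print(f"€{max_fine_eur('prohibited_ai_practice', 2_000_000_000):,.0f}")
```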



Innovations in AI: The Sandbox Advantage



The EU AI Act brings both challenges and opportunities for businesses using AI. While stricter regulations for high-risk applications will increase costs (data governance, human oversight), "sandboxes" provided by national authorities will aid innovation. These sandboxes function as simulated real-world environments specifically designed for testing AI systems. By providing this access, national authorities aim in particular to support startups and SMEs. Within these sandboxes, developers can reap several benefits:


Controlled Training and Refinement:


AI models can be trained and refined on realistic data sets without the risks associated with real-world deployment. This allows for a more controlled environment in which to optimize performance and identify biases.


Proactive Risk Management:


Developers can proactively identify and address safety and security risks in their AI systems before release. This helps mitigate harm and ensures compliance with the Act's regulations.


Safe Data Collection and User Feedback:


Sandboxes provide a secure space to gather data and user feedback on AI models. This valuable information can be used to improve the model's effectiveness and user experience before it's exposed to a wider audience.



In Conclusion


The EU AI Act, while comprehensive and demanding, is a positive step towards ensuring the responsible development and use of artificial intelligence. It casts a wide net, holding various players in the AI ecosystem accountable: companies developing and selling AI systems (providers), those using the systems within the EU (deployers), importers and distributors, and even manufacturers incorporating AI into their products. Essentially, anyone involved in bringing AI systems to market or using them in the EU needs to comply with the Act's regulations.

This, in turn, paves the way for advancements in Explainable AI (XAI), which will be crucial as AI becomes more integrated into our lives. A dedicated AI office, whether at the national or international level, could play a key role in facilitating this ongoing conversation and ensuring responsible development. Ultimately, the Act promotes responsible innovation, building trust in AI for a future where humans and machines collaborate effectively.


