
Podcast 🎧 & blog: Can EU regulation ensure a conscious use of AI?

20.03.2024 | Federico Plantera

The European Union’s AI Act is an initiative aimed at regulating the field of artificial intelligence. On Wednesday, March 13, the Parliament approved the regulation. It seeks to establish a legal framework for the use of AI, but also to position the EU at the forefront of global digital governance – at least in this domain.

The approach is comprehensive, touching on a wide spectrum of applications, from low-risk to high-risk categories, and tailoring regulatory requirements accordingly. Katrin Nyman-Metcalf, Adjunct Professor at TalTech and Associated Expert for e-Governance Academy, guides us through the propositions and principles of the AI Act, and how the EU plans to ensure a thoughtful and ethical use of artificial intelligence. This podcast episode was recorded shortly before the approval of the EU AI Act.

The EU AI Act – principles, features, mission

“This is one of the first legal attempts by the EU to harmonise AI regulation across member states and protect against negative effects. The EU AI Act introduces a risk categorisation for AI, dividing it into categories based on the level of risk each poses. This approach dictates the level of regulation needed, focusing on what the technology does – rather than prescribing specific uses. It’s a general but effective method to ensure that AI development aligns with European values and standards. But it’s also a measure to protect consumers, users, people,” Nyman-Metcalf begins.

The categorisation of AI systems into risk profiles is crucial here – in a range that goes from ‘minimal’ to ‘unacceptable’ risk. This risk-based approach allows for a regulatory framework that is fairly nuanced and can adapt to the diverse applications of AI, from consumer products to critical infrastructure. At the heart of the EU AI Act, after all, lies the ambition to safeguard European values and consumer rights while fostering a good environment for innovation. Thus, the Act’s dual focus: preventing fragmentation of AI regulations among member states and ensuring user and consumer protection. With an eye on the EU’s internal market dynamics, and one on its global competitiveness.

Moreover, the establishment of an EU AI office is expected to guide member states on the matter. “The EU AI office is set to play a coordinating role, not just overseeing regulation at the member state level but also facilitating dialogue with the industry and civil society. This approach, more proactive than previous initiatives like GDPR, aims to involve all relevant stakeholders from the outset, ensuring that the AI Act is shaped by a wide range of insights and concerns,” Nyman-Metcalf explains.


Engaging stakeholders to balance innovation and people’s rights

The EU AI Act has progressed through most stages of EU decision-making, from Commission proposal to Council and Parliament discussions. And as Nyman-Metcalf anticipated in this episode, the timing proved right: on March 13, 2024, the European Parliament passed the regulation.

But the engagement of non-governmental stakeholders in the implementation of the EU AI Act will be key to its success. From industry experts to civil society organisations, many can bring unique perspectives and expertise to the table. It’s a multi-stakeholder approach, one that may ensure that regulation is comprehensive and adaptable to the rapid advancements in AI technology. Industry involvement, in particular, stands out.

“I think it’s very important to involve the industry. Those developing AI understand its complexities and potential risks, perhaps better than anyone. Their involvement is essential at every step of the regulatory process, given the unpredictability of AI advancements. This multi-stakeholder approach, including industry experts, policymakers, and civil society, is vital for implementing regulations that are both effective and adaptable to future technological innovations,” Nyman-Metcalf points out.

“We also need mechanisms for continuous dialogue and collaboration among these stakeholders. Because they can ensure that the regulation remains responsive to new developments and challenges in AI technology. The establishment of the EU AI office, as mentioned, is a step towards institutionalising this stakeholder engagement.”

The need for a balanced approach to regulation is clear, to keep the aims and scope of the EU AI Act on track. On one hand, acknowledging the potential of AI to contribute significantly to societal advancements. On the other, recognising the ethical implications and risks associated with its use.

On EU values and lessons learned from the GDPR

Not that long ago, countries and organisations in the EU were battling over GDPR, the Union’s data protection regulation. What are the similarities with the current AI Act process, or the lessons learned from that experience?

Much like the GDPR, the AI Act is grounded in European values, emphasising the protection of individual rights and the promotion of a safe digital environment. The AI Act, however, extends to more and different domains. It does so by addressing the broader implications of AI on society, including ethical considerations and the potential for AI to impact fundamental human rights.

“Although the focus of the EU AI Act might seem different at first glance, it’s still fundamentally about protecting individuals – consumers and citizens alike. For example, the Act categorises as ‘unacceptable’ the risk of AI applications, such as toys, that could negatively influence children’s behavior. This protection of individual rights reflects the core European principle that the welfare of people is still at the core, even if it challenges industry or technological advancement,” Nyman-Metcalf explains.

Lessons from the GDPR’s development and implementation are already embedded in the AI Act. One of these lessons is the importance of regulatory clarity and applicability. Then, with a risk-based regulatory framework in place, the approach balances pragmatism with addressing the complex nature of AI technologies. By learning from the GDPR’s implementation, the AI Act seeks to avoid overly prescriptive regulations that could stifle innovation.

“If rules are too general or vague, there’s a risk that neither technological advancement will be supported, nor will people be adequately protected. The AI Act’s emphasis on transparency and risk management aims to navigate these challenges, ensuring that the development of AI technologies benefits society without compromising individual rights or safety,” Nyman-Metcalf concludes.

This podcast episode was recorded shortly before the approval of the EU AI Act on 13 March 2024.