Tove Mylläri, Laura Isotalo, Ada-Maaria Hyvärinen
May 13, 2023
The EU’s Artificial Intelligence Act – what does it mean for us?
We looked into the proposal for the EU Artificial Intelligence Regulation, and we are sharing our main insights here.

We recommend reading this, as the stakes are nothing less than democracy and human rights, as well as Europe’s ability to compete in the development and control of technology.

Over the past couple of years, the EU Artificial Intelligence Regulation (often referred to as the ‘AI Act’) has been a frequent topic of discussion among professionals working with AI. The regulation has raised many expectations but also concerns. Because there is no established definition of AI, one central challenge of the regulation is to define exactly what it regulates. Some hope that the AI Act will clarify the rights and obligations related to using the technology and create trust in systems that use AI. Others think the EU is focusing on the wrong things by attempting to regulate technology at a general level instead of targeting unwanted phenomena directly and adjusting for the special needs of different areas of application. The breakneck speed of current AI development adds a sense of urgency to the decision-making process.

The EU states that its approach to AI is based on excellence and trust, and that it aims to strengthen research and industrial capacity while ensuring safety, security and the protection of fundamental rights. According to the EU, we are now shaping a future that we need to secure and build with the help of concrete rules and procedures that we can all share. If the AI Act is adopted, it will be the most extensive piece of AI regulation enacted by any democracy in the world. In fact, one of the EU’s goals for the regulation is to influence AI solutions outside the EU as well, by setting an example and steering companies’ operations.

How the regulation approaches the challenges of AI

The proposed AI Act follows a risk-based approach, weighing potential risks to people’s health, safety and fundamental rights. It sorts AI risks into three categories: i) unacceptable risk, ii) high risk and iii) low or minimal risk. The regulation aims to prohibit all AI applications whose use creates a risk in the first category. This prohibited category includes AI-based social scoring applications, such as those used in China, and techniques that can be used to manipulate people without their knowledge.

It is likely that only relatively few AI applications would be classified as high-risk. Among them would be products regulated by product-specific legislation, such as AI systems classified as toys or medical devices. The regulation also has an annex listing critical areas and use cases in which AI systems are classified as high-risk if they pose a significant risk to the health, safety or fundamental rights of persons or to the environment. These critical areas include certain applications based on biometric data and biometric identification, AI systems that affect access to essential services, and systems that affect access to education or employment, to name a few.

Under the regulation, the providers of high-risk systems would have to fulfil several requirements concerning risk management, data management, technical documentation, oversight, CE marking and more. In addition to providers, the regulation also stipulates obligations for the users of high-risk systems. These obligations relate to following user instructions, managing the data that is fed into a system, keeping records and ensuring transparency.

Apparently inspired by generative applications like ChatGPT and Midjourney, the European Parliament has recently proposed bringing general-purpose AI technologies within the scope of the AI Act. Under the proposal, the regulation would also cover a new category of AI, so-called foundation models. The regulation stipulates new obligations for the providers of these kinds of models, such as requirements on risk analysis, data management, quality control and the functioning of the model. In addition, both foundation models and high-risk AI systems would have to be registered in an EU database introduced by the regulation. No obligations have been set for the users of foundation models.

To enforce the obligations that it sets, the regulation proposes penalties that can, depending on the situation, amount to tens of millions of euros or seven per cent of the total worldwide annual turnover of a company.

Defining AI is difficult

Even though the regulation is well-intended and founded on a real concern about the impacts of AI on the future of society, there are still many uncertainties in its practical application. One aspect that especially gave us pause was the regulation’s terminology and definitions. For example, the definition of an AI system is formulated in a way that makes it difficult to conclude that a given computer program is not AI. This creates uncertainty about what kinds of computer programs could be classified as high-risk AI.

Another term that we find especially problematic is ‘foundation model’. The term originates in AI research literature, but it has no established definition. According to the draft regulation, foundation models are trained on broad data at scale, designed for generality of output, and can be adapted to a wide range of distinctive tasks. This definition does not make it easy to exclude models from the scope of the term. The formulations used in the draft AI Act have raised concern among open source developers, as it is currently very unclear what kinds of software projects would be subject to the obligations enforced with large fines.
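
To get a sense of how broad this definition is in practice, here is a minimal sketch of our own (not taken from the regulation) using the Hugging Face transformers library. The model name and the classification task are purely illustrative:

```python
# Illustrative sketch only: adapting a broadly pre-trained, publicly shared
# model to a new, narrow task takes just a few lines of code.
from transformers import AutoModelForSequenceClassification, AutoTokenizer

model_name = "distilbert-base-uncased"  # example of an openly shared pre-trained model

tokenizer = AutoTokenizer.from_pretrained(model_name)
# num_labels=3 attaches a fresh classification head, repurposing the
# general-purpose language model for a specific task of our choosing.
model = AutoModelForSequenceClassification.from_pretrained(model_name, num_labels=3)

inputs = tokenizer("The service was excellent", return_tensors="pt")
logits = model(**inputs).logits
print(logits)  # scores are meaningless until the new head is fine-tuned
```

A few lines like these are enough to take a model trained on broad data at scale and point it at a distinctive new task, which illustrates how many everyday software projects could brush against the definition.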

Our belief is that it is worth keeping an eye on the effects of the terminology used in the AI Act. The text of the regulation will still likely evolve, but at the moment, there is good reason to ask whether the regulators’ attempt to prevent issues such as the malicious use of large language models amounts to throwing out the baby with the bathwater.

Which company operations the AI Act could impact

We would like to encourage discourse and awareness among companies about the types of business operations the upcoming AI Act could impact. Due to its fuzzy definitions, the regulation could, at least in theory, even impact things that might not initially come to mind.

One example that came to mind was the recruitment of new employees. If recruitment is supported by a computer program of some kind, could it be construed as high-risk AI within the meaning of the AI Act? According to the draft regulation, an AI system is high-risk if it belongs to one of the specified critical fields of application and poses a significant risk to, for example, fundamental rights. One of the critical applications mentioned in the regulation is AI systems intended for employment and workers management. We should bear in mind that the users of such AI might also find themselves subject to obligations, even if the AI was not developed in-house.
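
As a thought experiment, consider the following deliberately simple sketch of a recruitment helper. The features, data and library choice (scikit-learn) are entirely made up by us for illustration; there is nothing exotic about the code, yet it scores job applicants based on past hiring decisions:

```python
# Hypothetical example: a very ordinary applicant-scoring script.
# Despite its simplicity, a tool like this touches the "employment and
# workers management" area listed as critical in the draft regulation.
from sklearn.linear_model import LogisticRegression

# Toy features per applicant: [years_of_experience, relevant_degree (0/1), test_score]
past_applicants = [
    [1, 0, 55],
    [4, 1, 78],
    [7, 1, 90],
    [2, 0, 60],
]
was_hired = [0, 1, 1, 0]  # historical hiring decisions used as training labels

model = LogisticRegression()
model.fit(past_applicants, was_hired)

new_applicants = [[3, 1, 70], [6, 0, 85]]
scores = model.predict_proba(new_applicants)[:, 1]
print(scores)  # probabilities that could be used to rank or filter candidates
```

Whether something like this would ultimately count as a high-risk AI system is exactly the kind of question the current definitions leave open.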

And what about open source tools: does your team use them? Many projects use open source programs and models shared on platforms like Hugging Face and GitHub. Based on the proposed regulation, it seems very likely that some of the software shared on these platforms could be classified as foundation models. The Parliament’s current draft specifically mentions that open source models can be foundation models. In addition, the scope of the regulation is not limited to countries in Europe; in principle, it concerns any model provider in any country if the model is available on the EU market. All this makes us wonder to what extent open source models would fall under EU regulation and who would have to bear the responsibility for fulfilling legislative obligations.

It is hard to predict what consequences the regulation in its current form would have for certain very popular platforms where models and other programs are currently shared fairly freely for others to use. When the EU General Data Protection Regulation entered into force, some American websites blocked access from the EU for up to several months. Google has also so far decided not to offer its Bard model, a competitor to ChatGPT, to people living in the EU, apparently for legislative reasons.

Authors’ thoughts

AI technology is in itself neutral in the same way that all computers are. Everything from neural networks to simple if-else commands can be used to either build up or tear down society. Right now, the challenge is exactly that we cannot reliably predict all the future potential of AI. One similar moment in history was the discovery of radium. In the initial excitement, radium was applied to watches and added to toothpaste before its dangers to the human body were discovered. It would be great if we could avoid making similar mistakes with AI.

No matter what you think about AI and attempts to regulate it, the EU AI Act will probably be adopted sooner or later. People working with technology should keep a close eye on the situation – preferably together with lawyers who have a good grasp of the topic – and prepare for the upcoming changes.

From the perspective of AI developers, the future brings threats and opportunities alike. On the one hand, restricting the most dubious applications of the technology is probably a good idea. Recent years have already offered several examples of how poorly functioning technology can cause real harm to people who had no say in the use of AI. Even before ChatGPT, it was possible to use data in ways that threaten democracy. For AI legislation to fulfil its purpose, it would have to be able to address cases like these. On the other hand, the development of the technology enables a huge number of genuinely positive uses that can, for example, improve people’s health and ability to learn. In our view, we should also fight tooth and nail to hold on to this potential to improve people’s lives and society. In this context, it is also important to understand how much it has meant for AI research and development that, up to this point, it has been possible to share research results and models without restrictive bureaucracy and to build on the work that others have done and shared under an open source license.

Finally, here is our tip for a great talk. Professor Natali Helberger from the University of Amsterdam, who is also one of the founders of the AI, Media & Democracy Lab and a researcher for Human(e) AI, gave an extremely interesting keynote titled Legal and Ethical Questions Around Generative AI at the Nordic Media in AI event organised this spring. You can watch the talk for free here.

About the authors

We at Yle News Lab want to share information and expertise on AI in a way that is as understandable and accessible as possible, because we think it is the best way to reduce socio-digital inequality and improve everyone’s AI literacy. That is why we decided to write this piece.

Laura Isotalo - Data Scientist

Ada-Maaria Hyvärinen - Data Scientist

Tove Mylläri - Experiments & Collaborations Team Lead (Democracy & Digitalisation)

Read this in Finnish