How is ChatGPT regulated by the EU AI Act: Reflections on higher education
The AI Act, an EU regulation effective from August 2024, emphasises protecting human rights by ensuring AI transparency, safety, and ethical standards. It mandates clear labelling of AI-generated content to prevent misinformation and protect privacy. Educational institutions using AI, like ChatGPT, must comply, safeguarding students' rights and fostering ethical AI integration.
The Artificial Intelligence Act (AI Act) is a European Union (EU) regulation concerning the use of artificial intelligence (AI) within the EU, including Generative AI (GenAI). It applies to providers and developers of AI systems that are marketed or used within the EU, regardless of whether those providers or developers are established in the EU or not. It entered into force in August 2024. The specific provisions concerning general-purpose AI (GPAI) models and the related penalties will start applying in August 2025. The AI Act’s main aim is to ensure that ‘AI systems respect fundamental rights, safety, and ethical principles.’ The newly created EU AI Office will oversee the implementation and enforcement of the AI Act.
The AI Act has adopted a risk-based approach, classifying AI systems according to the risk they pose to users and establishing obligations for providers and developers depending on this classification. The different risk levels will mean more or less regulation. Unacceptable risk AI systems are systems considered a threat to people and are prohibited. AI systems that negatively affect safety or fundamental rights are considered high risk and are subject to a long list of obligations. Limited-risk AI systems have specific transparency obligations.
The AI Act was initially proposed in 2021, but the legislative process was notably disrupted when OpenAI’s ChatGPT was launched on 30 November 2022. This required changes to the draft text to create specific rules for generative AI. In its final version, the AI Act also imposes transparency obligations and certain restrictions on the use of general-purpose AI models such as GenAI.
How is GenAI defined in the AI Act?
Article 3(63) of the AI Act defines a general-purpose AI (GPAI) model, also known as a foundation model, as an AI model that, including where trained with a large amount of data using self-supervision at scale, displays significant generality and is capable of competently performing a wide range of distinct tasks, regardless of the way the model is placed on the market, and that can be integrated into a variety of downstream systems or applications. This does not cover AI models that are used for research, development and prototyping activities before their release on the market (Article 3(63)). A prominent example of GPAI models are the so-called Large Language Models (LLMs).
Notably, GPAI models may be further modified or fine-tuned into new models. They do not constitute AI systems on their own: they require the addition of further components, such as a user interface, to become AI systems. AI models are typically integrated into and form part of AI systems. When a GPAI model is integrated into an AI system that is then placed on the market, the model is considered to have been placed on the market as well. The AI Act’s obligations for models therefore apply in addition to those for AI systems.
A GPAI system is defined as an AI system based on a general-purpose AI model that can serve a variety of purposes, both for direct use and integration into other AI systems (Article 3(66) of the AI Act). GPAI systems may be used as high-risk AI systems or integrated as components of high-risk AI systems. To enable the latter’s compliance and ensure a fair sharing of responsibilities along the AI value chain, GPAI system providers should cooperate with such high-risk AI system providers.
One subset of GPAI systems is represented by systems that can create, generate, or develop images, text, code, and so on. These systems are referred to as GenAI. ChatGPT is an example of a generative AI system.
How are GPAIs classified in the AI Act?
The AI Act classifies GPAI models into two groups: GPAI without systemic risk and GPAI with systemic risk. A GPAI model is considered to have systemic risk if one of the following two criteria is fulfilled:
- it has high-impact capabilities, which are presumed when the cumulative amount of compute used for its training, measured in floating point operations (FLOPs), is greater than 10^25 (here FLOPs count the total number of floating-point operations performed during training, not the processing speed of the hardware; a rough way of estimating this figure is sketched after this list), or
- if the European Commission so decides, taking into account, for example, the number of parameters, the quality and size of the dataset, the input and output modalities, or the model’s reach, measured in business users.
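To give a sense of scale, the short Python sketch below estimates a model’s training compute using the ‘6 × parameters × training tokens’ rule of thumb common in the machine-learning literature and compares it with the presumption threshold in Article 51(2). This is purely illustrative: the rule of thumb is not prescribed by the AI Act, and the parameter and token counts are hypothetical.

```python
# Minimal sketch: rough check against the AI Act's 10^25 FLOP presumption threshold.
# The 6 * parameters * tokens estimate is a common approximation from the ML
# literature, NOT a method prescribed by the AI Act; all figures are illustrative.

SYSTEMIC_RISK_THRESHOLD_FLOP = 1e25  # Article 51(2): cumulative training compute


def estimated_training_flop(parameters: float, training_tokens: float) -> float:
    """Approximate the total floating-point operations used to train a model."""
    return 6 * parameters * training_tokens


# Hypothetical model: 100 billion parameters trained on 20 trillion tokens.
compute = estimated_training_flop(parameters=1e11, training_tokens=2e13)
print(f"Estimated training compute: {compute:.2e} FLOP")
print("Presumed to have high-impact capabilities:",
      compute > SYSTEMIC_RISK_THRESHOLD_FLOP)
```

With these assumed figures the estimate works out to about 1.2 × 10^25 FLOPs, just above the threshold, so such a model would be presumed to have high-impact capabilities.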
Obligations for GPAI providers
Obligations for GPAI providers are set out in Article 53 of the AI Act. All GPAI model providers must:
- provide technical documentation of the model;
- provide instructions for use and documentation to providers of AI systems who intend to integrate the general-purpose AI model into their AI systems (also known as downstream providers);
- put in place a policy to comply with EU copyright law (for example, by respecting rights holders’ text-and-data-mining opt-outs); and
- publish a summary about the content used for training.
Providers located outside the EU who want to place a GPAI model on the EU market must appoint an authorised representative in the Union (Article 54(1) of the AI Act). Providers of GPAI models released under a free and open-source licence only need to comply with copyright law and publish a summary of their training data, unless their models present a systemic risk.
In addition, all providers of GPAI models that present a systemic risk must also (Article 55 of the AI Act):
- conduct model evaluations, including adversarial testing to identify and mitigate systemic risk;
- assess and mitigate possible systemic risks at Union level;
- track and report serious incidents and possible corrective measures to address them to the AI Office and relevant national competent authorities;
- ensure cybersecurity protection.
The liability chain
Anyone who modifies the intended purpose of an AI system, including a GPAI system, that has not been classified as high-risk and has already been placed on the market or put into service, in such a way that the system becomes a high-risk AI system in accordance with Article 6 of the AI Act, will be considered a provider of a high-risk AI system and will be subject to the provider obligations under Article 16 of the AI Act (Article 25(1)(c) of the AI Act).
The initial provider of such a system should cooperate closely with the new provider, make available the necessary information, and provide the reasonably expected technical access and other assistance required to fulfil the obligations of the AI Act, in particular compliance with the conformity assessment for high-risk AI systems (Article 25(2) of the AI Act).
Therefore, in establishing the liability chain, it is important to know the role of each provider and to clearly determine the start and end point of their contribution to the AI system. This might not always be easy. Consider, for example, a GPAI model that, before being integrated into an AI system, is modified by three other providers, two of which have not communicated their modifications to the initial provider. In that case the initial provider would be unaware of the changes made to the model and could not reasonably be held responsible for any risks arising from them.
What does the AI Act imply for ChatGPT?
ChatGPT is not classified as high-risk under the AI Act. It will, however, have to comply with transparency requirements and copyright law, including:
- Providing the technical documentation of the model.
- Making available information and documentation to providers of AI systems who intend to integrate ChatGPT into their AI systems.
- Disclosing that the content was generated by AI.
- Designing the model to prevent it from generating illegal content.
- Publishing summaries of copyrighted data used for training.
The more advanced models, however, namely GPT-4 and GPT-4o, might be considered GPAI with systemic risk. As such, they will have to undergo thorough evaluations, and any serious incidents will have to be reported to the European Commission. Importantly, content that is either generated or modified with the help of AI - images, audio, or video files (for example, deepfakes) - needs to be clearly labelled as AI-generated so that users are aware when they come across such content. This transparency requirement ensures that individuals have the right to accurate and truthful information and helps prevent misinformation. Additionally, labelling AI-generated content protects individuals' privacy, especially in connection with deepfakes. Moreover, deepfakes and other manipulated content can be used to defame or falsely incriminate individuals; clear labelling can prevent such misuse, thus protecting individuals' reputations, dignity, and right to a fair trial. Lastly, these transparency requirements also have an impact on freedom of expression: while AI tools can enhance creativity, transparency in their use ensures that the integrity of genuine human expression is maintained.
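The AI Act does not prescribe a particular labelling technique. As one possible illustration only, the sketch below embeds a machine-readable disclosure in a PNG file’s metadata using the Pillow library; the placeholder image and the tag names are assumptions, not a format mandated by the Act.

```python
# Minimal sketch of machine-readable labelling of AI-generated content.
# The tag names ("ai_generated", "disclosure") are illustrative assumptions;
# the blank image merely stands in for AI-generated output.
from PIL import Image, PngImagePlugin

image = Image.new("RGB", (256, 256), "white")  # stand-in for an AI-generated image

metadata = PngImagePlugin.PngInfo()
metadata.add_text("ai_generated", "true")
metadata.add_text("disclosure", "This image was generated by an AI system.")

image.save("generated_image.png", pnginfo=metadata)

# Anyone receiving the file can read the disclosure back from its text chunks:
print(Image.open("generated_image.png").text)
```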
Recital 101 of the AI Act acknowledges the particular role and responsibility of providers of GPAI models along the AI value chain. It recognises that these models may form the basis for a range of downstream systems, often offered by downstream providers (namely those who integrate a third-party or open-source GPAI model into their own AI systems), who need a good understanding of the models and their capabilities, both to enable the integration of such models into their products and to fulfil their legal obligations. Proportionate transparency measures are therefore put in place.
In addition, the AI Act interacts with other EU laws such as the General Data Protection Regulation (GDPR) and copyright law, and has some consumer-law characteristics. This means that ChatGPT, and other GPAI systems, will have to comply with these laws too in order to ensure compliance with the AI Act. It must also be highlighted that complying with the AI Act is not just about meeting requirements but also about positioning oneself at the forefront of responsible AI development, reflecting a commitment to human rights, ethical standards and safety, which is the main goal of the AI Act.
What does the EU AI Act imply for universities using ChatGPT?
Several European universities have decided to embrace ChatGPT, either by allowing its use or by developing their own models similar to ChatGPT, the system developed by OpenAI. Higher education institutions (HEIs) can even use ChatGPT for their administrative services, for instance as a chatbot that students can use to get quick answers to their questions.
To determine the role and consequently the obligations of universities under the AI Act in such cases, it is worth analysing the definition of ‘deployer’. A deployer is defined in Article 3(4) of the AI Act as
any natural or legal person, public authority, agency or other body using an AI system under its authority, except where the AI system is used in the course of a personal, non-professional activity.
Given this definition, a university using a GPAI system under its authority to perform its functions (e.g., for educational, administrative, or research purposes) would fall under the definition of a deployer, as it is using the AI system within its organisational capacity and not for personal, non-professional activity.
In summary, a university integrating ChatGPT into its own AI system would primarily be considered a ‘deployer’ if it uses the system internally under its authority. If it then offers this integrated AI system to others, it could also be considered a ‘downstream provider’. Thus, the specific classification depends on the nature of the university's use and distribution of the AI system.
Notably, even if a university is using the AI system (integrating ChatGPT) under its authority to provide a chatbot to its students (who are part of the university), this use is still considered internal and professional. The students are part of the university's community, and the chatbot is used to support their educational activities and to address administrative issues. Thus, even when the university provides the integrated AI chatbot to its students for use, it is still acting within its own authority and capacity as an educational institution. Therefore, the university would be classified as a ‘deployer’ and would have to comply with the deployer’s obligations under the AI Act.
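To make this concrete, the following is a minimal, hypothetical sketch of how a university help-desk chatbot built on OpenAI’s API might disclose to students that they are interacting with an AI system and that answers are AI-generated. The model name, prompts and wording of the notice are illustrative assumptions, not a compliance recipe.

```python
# Minimal sketch of a university help-desk chatbot with an AI-use disclosure.
# Assumes the `openai` Python package (v1 SDK) and an OPENAI_API_KEY in the
# environment; the system prompt, model choice and notice text are assumptions.
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

AI_DISCLOSURE = ("You are chatting with an AI assistant. "
                 "Answers are AI-generated and may contain errors; "
                 "please verify important information with university staff.")


def answer_student_question(question: str) -> str:
    response = client.chat.completions.create(
        model="gpt-4o",  # illustrative model choice
        messages=[
            {"role": "system",
             "content": "You answer administrative questions for university students."},
            {"role": "user", "content": question},
        ],
    )
    # Prepend the transparency notice so every reply carries the disclosure.
    return f"{AI_DISCLOSURE}\n\n{response.choices[0].message.content}"


print(answer_student_question("How do I register for the autumn semester?"))
```

In practice a university acting as a deployer would also need to address data protection, accuracy of information and staff oversight; the disclosure shown here covers only the transparency aspect discussed above.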
When dealing with the challenge of regulating ChatGPT’s use in higher education, and with the uncertainties associated with ChatGPT’s rapid evolution and its impact on the education system, fostering collaboration and knowledge-sharing among higher education institutions is crucial. Moreover, educational institutions need to take the lead in regulating the use of ChatGPT in the sector by collaborating closely with OpenAI to develop and adopt models that meet the ethical, social and legal requirements of higher education, including those related to human rights.
Written by Desara Dushi
Dr. Desara Dushi is a senior postdoctoral researcher at the Law, Science, Technology & Society Research Group (LSTS), Vrije Universiteit Brussel. She holds a double PhD degree in Law, Science and Technology from University of Bologna and University of Luxembourg. She was one of the policy analysts of the 6th edition of the GC Policy Observatory.
Cite as: Dushi, Desara. "How is ChatGPT regulated by the EU AI Act: Reflections on higher education", GC Human Rights Preparedness, 5 September 2024, https://gchumanrights.org/gc-preparedness/preparedness-economic-social-and-cultural-rights/article-detail/how-is-chatgpt-regulated-by-the-eu-ai-act-reflections-on-higher-education.html
- #AI
- #AIAct
- #Compliance
- #Education
- #HumanRights
- #Transparency
Disclaimer
This site is not intended to convey legal advice. Responsibility for opinions expressed in submissions published on this website rests solely with the author(s). Publication does not constitute endorsement by the Global Campus of Human Rights.
CC-BY-NC-ND. All content of this initiative is licensed under a Creative Commons Attribution-NonCommercial-NoDerivatives 4.0 International License.