- Purpose and scope of application
- Risk categories of AI systems
- Limited and minimal risk AI systems
- High-risk AI systems
- Prohibited artificial intelligence practices
- Conformity assessment of high-risk AI systems
- Registration of high-risk AI systems
- Sub-Services for Responsible AI Licenses
- Framework for notified bodies
- Penalties
- Official Sources & Primary Legislation
Purpose and scope of application
The EU Artificial Intelligence Act (AI Act) establishes a harmonised legal framework governing the development, placing on the market, putting into service, and use of artificial intelligence systems within the European Union. The objectives and scope of application of the Regulation are primarily set out in Article 1 and Article 2 of Regulation (EU) 2024/1689.
Pursuant to Article 1(1) of the AI Act, the Regulation aims to improve the functioning of the internal market by laying down uniform rules for AI systems, while ensuring a high level of protection of health, safety, and fundamental rights as enshrined in the Charter of Fundamental Rights of the European Union. The Regulation explicitly seeks to promote human-centric and trustworthy artificial intelligence, support innovation, and prevent fragmentation of national AI rules across Member States.
The legal basis of the AI Act relies primarily on Article 114 of the Treaty on the Functioning of the European Union (TFEU), allowing the EU to adopt harmonised rules for the internal market, and, in specific cases relating to biometric data and law enforcement, on Article 16 TFEU, which concerns the protection of personal data. This dual legal basis reflects the close interaction between AI regulation, product safety law, and data protection law.
The material scope of the AI Act is defined in Article 2(1) and covers all artificial intelligence systems as defined in Article 3(1) of the Regulation. The definition of an AI system is intentionally broad and technology-neutral, encompassing machine-based systems designed to operate with varying levels of autonomy and capable of generating outputs such as predictions, content, recommendations, or decisions that influence physical or virtual environments.
From a territorial perspective, the AI Act has a clear extraterritorial effect. In accordance with Article 2(1)(a)-(c), the Regulation applies not only to providers and deployers established within the EU, but also to entities established in third countries where an AI system is placed on the EU market, put into service in the EU, or where the output produced by the AI system is used within the Union. This approach mirrors the regulatory logic of the GDPR and is designed to prevent circumvention of EU AI rules through offshore deployment models.
At the same time, the AI Act establishes a number of explicit exclusions. Under Article 2(3), the Regulation does not apply to AI systems developed or used exclusively for military, defence, or national security purposes. In addition, Article 2(6) excludes AI systems developed and used solely for scientific research and development, provided they are not placed on the market or put into service. AI systems used for purely personal and non-professional activities are also excluded from the scope of the Regulation.
The AI Act is designed as a horizontal regulatory instrument and applies without prejudice to existing EU sector-specific legislation. As clarified in Article 2(7), the Regulation complements, rather than replaces, existing EU legal frameworks such as the General Data Protection Regulation (GDPR), EU consumer protection law, labour law, and product safety legislation. Where AI systems qualify as safety components of regulated products, compliance with the AI Act is required in addition to sectoral conformity requirements.
Overall, the purpose and scope provisions of the AI Act establish the legal foundation for its core regulatory mechanism: a risk-based approach to artificial intelligence regulation, under which AI systems are classified and regulated according to the level of risk they pose to public interests and fundamental rights. This risk-based structure underpins all subsequent obligations under the Regulation and determines whether an AI system is prohibited, subject to strict compliance requirements, or largely exempt from regulatory oversight.
Risk categories of AI systems
The regulatory architecture of the AI Act is built around a risk-based classification of artificial intelligence systems, which determines the extent of regulatory obligations applicable to a given AI use case. This approach is expressly established in Article 1(2) and further operationalised through Articles 5, 6 and 50 of Regulation (EU) 2024/1689.
Rather than regulating artificial intelligence as a single homogeneous category, the AI Act differentiates AI systems according to the severity and probability of harm they may cause to public interests protected under EU law, in particular health, safety, and fundamental rights. This structure reflects the EU legislator’s intention to ensure proportionality, legal certainty, and effective enforcement, while avoiding unnecessary constraints on low-risk and innovation-driven AI technologies.
Under the AI Act, AI systems are classified into four principal risk categories:
- unacceptable risk AI systems,
- high-risk AI systems,
- limited-risk AI systems,
- minimal or no-risk AI systems.
Each category is subject to a distinct regulatory treatment, ranging from a complete prohibition to the absence of binding obligations.
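For teams tracking AI Act assessments internally, the four-tier structure lends itself to a simple controlled vocabulary. The Python sketch below is purely illustrative: the tier labels and the example use cases are simplified assumptions for demonstration, not classifications performed by the Regulation itself.

```python
from enum import Enum

class RiskTier(Enum):
    """Illustrative labels for the AI Act's four risk categories."""
    UNACCEPTABLE = "prohibited practices (Article 5)"
    HIGH = "high-risk (Article 6 and Annex III)"
    LIMITED = "transparency obligations only (Article 50)"
    MINIMAL = "no mandatory obligations (Recital 27)"

# Hypothetical internal register pairing example use cases with assessed tiers.
assessments = {
    "social scoring by a public authority": RiskTier.UNACCEPTABLE,
    "CV-screening tool used in recruitment": RiskTier.HIGH,
    "customer-service chatbot": RiskTier.LIMITED,
    "spam filter": RiskTier.MINIMAL,
}

for use_case, tier in assessments.items():
    print(f"{use_case} -> {tier.value}")
```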
AI systems that pose an unacceptable risk are addressed in Article 5 of the Regulation. These systems are considered incompatible with EU values and fundamental rights and are therefore prohibited from being placed on the market, put into service, or used within the Union. The prohibition is absolute and reflects the legislator’s assessment that certain AI practices generate risks that cannot be sufficiently mitigated through technical, organisational, or human oversight measures.
The second and most operationally significant category is that of high-risk AI systems, defined in Article 6 of the AI Act and further specified in Annex III. High-risk AI systems are not prohibited; however, they are subject to extensive ex ante conformity assessments and ongoing compliance obligations. The classification of an AI system as high-risk depends on its intended purpose and the context of its use, rather than solely on the underlying technology. In particular, AI systems used as safety components of regulated products or deployed in predefined sensitive areas affecting fundamental rights are presumed to fall within this category.
Between prohibited and high-risk systems, the AI Act identifies limited-risk AI systems, which are regulated primarily through transparency obligations set out in Article 50. These systems typically involve direct interaction with natural persons or the generation of synthetic content, where the primary regulatory concern is the protection of user autonomy and informed decision-making rather than systemic harm.
Finally, the AI Act recognises a broad category of minimal or no-risk AI systems, which are not subject to mandatory regulatory requirements. As clarified in Recital 27, the Regulation deliberately avoids imposing obligations on these systems in order to foster innovation and ensure the free circulation of AI-based goods and services within the internal market. Providers of such systems may nevertheless voluntarily adhere to codes of conduct and best practices.
A key feature of the AI Act’s risk classification framework is its functional and dynamic nature. The classification of an AI system depends on how it is used in practice, and the same system may fall under different risk categories depending on its deployment context. Moreover, the European Commission is empowered to amend the list of high-risk use cases through delegated acts, allowing the framework to evolve alongside technological and societal developments.
From a compliance perspective, the correct classification of an AI system under the AI Act constitutes a foundational legal assessment. An incorrect risk categorisation may result in the application of inadequate compliance measures and expose providers and deployers to enforcement actions and administrative penalties. Consequently, risk classification under the AI Act is increasingly treated as an integral part of AI governance, product design, and regulatory strategy for companies operating in or targeting the EU market.
Limited and minimal risk AI systems
The AI Act establishes a differentiated regulatory treatment for AI systems that are assessed as posing limited risk or minimal (no) risk to health, safety, and fundamental rights. This approach reflects the principle of proportionality set out in Article 1(2) and Recital 27 of Regulation (EU) 2024/1689, according to which regulatory intervention should be commensurate with the level of risk generated by an AI system.
AI systems classified as limited-risk AI systems are not subject to the extensive compliance regime applicable to high-risk AI systems. Instead, they are regulated primarily through transparency obligations, which are laid down in Article 50 of the AI Act. The underlying regulatory concern in this category is not systemic harm, but rather the protection of user autonomy, awareness, and informed decision-making when individuals interact with AI systems or are exposed to AI-generated content.
Pursuant to Article 50(1), providers of AI systems intended to interact directly with natural persons must ensure that those systems are designed and developed in such a way that the individuals concerned are informed that they are interacting with an AI system, unless this is obvious from the circumstances and the context of use. This obligation applies, for example, to conversational AI systems and virtual assistants that may otherwise create a misleading impression of human interaction.
In addition, Article 50(2) and (4) impose specific transparency requirements for AI systems that generate or manipulate content, including audio, image, video, or text. Where such content is artificially generated or altered in a manner that may mislead individuals, it must be clearly disclosed as AI-generated, subject to narrowly defined exceptions, including law enforcement activities and the exercise of freedom of expression in artistic or journalistic contexts.
The AI Act also addresses emotion recognition and biometric categorisation systems outside high-risk contexts. While certain uses of such systems are prohibited or classified as high-risk, their deployment in other, non-sensitive contexts may fall within the limited-risk category and trigger transparency obligations rather than outright bans or conformity assessments.
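To make the structure of these transparency triggers concrete, the sketch below maps deployment characteristics to the Article 50 duties just described. It is a minimal illustration, not a compliance tool; the boolean flags are hypothetical simplifications of assessments that in practice require legal analysis of the specific deployment.

```python
def transparency_duties(
    interacts_with_persons: bool,
    generates_synthetic_content: bool,
    uses_emotion_recognition: bool,
    produces_deep_fakes: bool,
) -> list[str]:
    """Illustrative mapping of system features to Article 50 duties."""
    duties = []
    if interacts_with_persons:
        duties.append("inform users they are interacting with AI (Art. 50(1))")
    if generates_synthetic_content:
        duties.append("mark outputs as artificially generated (Art. 50(2))")
    if uses_emotion_recognition:
        duties.append("inform exposed persons of the system's operation (Art. 50(3))")
    if produces_deep_fakes:
        duties.append("disclose artificial generation or manipulation (Art. 50(4))")
    return duties

# Example: a chatbot that also generates synthetic images.
print(transparency_duties(True, True, False, False))
```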
By contrast, minimal or no-risk AI systems are explicitly excluded from binding regulatory obligations under the AI Act. As explained in Recital 27, these systems are considered to present negligible risk to individuals or society and therefore do not justify mandatory compliance requirements. This category encompasses the majority of AI applications currently deployed in commercial and consumer environments, including internal business optimisation tools, recommendation engines for non-sensitive content, and AI systems embedded in entertainment or productivity software.
Although minimal-risk AI systems are not regulated, the AI Act encourages providers to voluntarily adopt codes of conduct and internal governance measures, as envisaged in Article 95. These voluntary instruments are intended to promote responsible AI development and use, without imposing legally binding obligations that could hinder innovation or market entry.
From a legal and compliance standpoint, the distinction between limited-risk and minimal-risk AI systems remains significant. While minimal-risk systems fall entirely outside the scope of enforceable obligations, limited-risk systems are subject to targeted transparency rules, non-compliance with which may result in administrative penalties under the AI Act. As a result, even AI systems that do not qualify as high-risk require a structured legal assessment to determine whether transparency obligations apply.
In practice, the classification of AI systems as limited-risk or minimal-risk constitutes an important element of AI governance and compliance frameworks, particularly for companies offering AI-enabled services to EU users. Correct classification ensures regulatory alignment while preserving the flexibility necessary for technological development and innovation.
High-risk AI systems
The category of high-risk AI systems constitutes the core of the regulatory framework established by the AI Act. While such systems are not prohibited, they are considered capable of posing significant risks to health, safety, and fundamental rights, and are therefore subject to strict ex ante and ongoing compliance obligations.
The legal basis for the classification of high-risk AI systems is set out in Article 6 of Regulation (EU) 2024/1689, read in conjunction with Annex III of the AI Act.
Under Article 6(1), an AI system is classified as high-risk where it is intended to be used as a safety component of a product, or where it is itself a product, covered by existing EU harmonisation legislation listed in Annex I, and where that product is subject to a third-party conformity assessment under the relevant sectoral legislation. This includes, among others, AI systems used in medical devices, machinery, aviation, automotive systems, and other regulated products where safety considerations are paramount.
In addition to product-related use cases, Article 6(2) classifies certain stand-alone AI systems as high-risk where, in light of their intended purpose, they pose a high risk to fundamental rights and are deployed in specific sensitive areas enumerated in Annex III. These areas include, inter alia, biometric identification and categorisation, management of critical infrastructure, education and vocational training, employment and worker management, access to essential public and private services, law enforcement, migration and border control, and the administration of justice.
The classification as high-risk is functional rather than purely technical. As clarified in Article 6(3) and Recitals 52 and 53, the decisive factor is whether the AI system materially influences decision-making processes that may affect protected legal interests. AI systems that perform narrowly defined, purely auxiliary or procedural tasks and do not materially influence outcomes may, under the derogation in Article 6(3), fall outside the high-risk classification, even if deployed in a listed sector.
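The two classification routes under Article 6, together with the Article 6(3) carve-out, can be expressed as a short decision sketch. The function below is an illustrative simplification under stated assumptions (each flag stands in for a full legal assessment); it is not a substitute for the analysis the Regulation requires.

```python
def is_high_risk(
    safety_component_of_annex_i_product: bool,
    requires_third_party_assessment: bool,
    annex_iii_use_case: bool,
    purely_auxiliary_task: bool,
) -> bool:
    """Illustrative decision logic loosely following Article 6."""
    # Route 1 - Article 6(1): safety component of (or itself) a product under
    # Annex I harmonisation legislation requiring third-party assessment.
    if safety_component_of_annex_i_product and requires_third_party_assessment:
        return True
    # Route 2 - Article 6(2)/Annex III: listed sensitive use case, unless the
    # system performs only a narrow auxiliary task that does not materially
    # influence outcomes (the Article 6(3) derogation).
    if annex_iii_use_case and not purely_auxiliary_task:
        return True
    return False

# Example: a recruitment-screening tool (Annex III, employment).
print(is_high_risk(False, False, True, False))  # True
```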
Once an AI system qualifies as high-risk, the provider becomes subject to the comprehensive compliance regime set out in Chapter III, Section 2 of the AI Act (Articles 8 to 15). These provisions establish mandatory requirements relating to risk management, data governance, technical documentation, record-keeping, transparency, human oversight, and accuracy, robustness, and cybersecurity. Compliance with these requirements must be demonstrated prior to placing the system on the market or putting it into service.
Importantly, the classification of an AI system as high-risk under the AI Act does not automatically imply that the underlying product is classified as high-risk under sector-specific EU legislation, nor does it replace existing conformity assessment regimes. As clarified in Recitals 51 and 52, the AI Act operates in parallel with sectoral product safety frameworks, and providers must ensure compliance with all applicable legal instruments.
From a regulatory and strategic perspective, high-risk AI systems represent the highest compliance burden under the AI Act short of outright prohibition. Correct identification of high-risk status is therefore critical, as misclassification may result in the placing on the market of non-compliant AI systems and expose providers and deployers to significant enforcement measures and administrative fines.
Prohibited artificial intelligence practices
The AI Act establishes a category of artificial intelligence practices that are deemed to pose an unacceptable risk to fundamental rights and public interests protected under EU law. Such practices are considered inherently incompatible with Union values and are therefore strictly prohibited, irrespective of any potential technical safeguards or mitigation measures.
The legal basis for these prohibitions is set out in Article 5 of Regulation (EU) 2024/1689, read together with Recitals 28 to 45 of the AI Act.
Pursuant to Article 5(1), prohibited AI practices may not be placed on the market, put into service, or used within the European Union under any circumstances, except where the Regulation itself explicitly provides for narrowly defined derogations. The prohibition applies equally to providers and deployers, regardless of whether they are established within the EU or in a third country, insofar as the AI system falls within the territorial scope of the Regulation.
One of the central categories of prohibited practices concerns AI systems that deploy subliminal, manipulative, or deceptive techniques with the objective or effect of materially distorting human behaviour in a manner that causes, or is reasonably likely to cause, significant harm. As clarified in Article 5(1)(a) and Recital 29, such harm may be physical, psychological, or economic in nature. The decisive element is not the provider’s intent, but the objective capability of the AI system to undermine individual autonomy and free decision-making in a harmful manner.
Closely related are AI systems that exploit vulnerabilities of specific groups of persons, including children, persons with disabilities, or individuals in a particularly vulnerable social or economic situation. Under Article 5(1)(b), AI systems that take advantage of such vulnerabilities in order to materially distort behaviour and cause harm are prohibited. This provision reflects the EU legislator’s heightened concern for groups that require special protection under Union law.
The AI Act also prohibits social scoring systems, as set out in Article 5(1)(c). These are AI systems that evaluate or classify natural persons based on their social behaviour or personal characteristics in a manner that leads to detrimental or unfavourable treatment in social contexts unrelated to the original data collection. The prohibition applies to both public authorities and private actors and aims to prevent discriminatory outcomes and systemic exclusion incompatible with human dignity and equality before the law.
A further set of prohibitions concerns certain biometric practices, which are regarded as particularly intrusive. Under Article 5(1)(e), AI systems that create or expand facial recognition databases through the untargeted scraping of facial images from the internet or CCTV footage are prohibited. This practice is considered to contribute to mass surveillance and to pose a serious threat to the right to privacy and data protection.
In addition, the AI Act introduces strict limitations on the use of real-time remote biometric identification systems in publicly accessible spaces for law enforcement purposes. As provided in Article 5(1)(h), such systems are generally prohibited, subject only to exhaustively listed and narrowly construed exceptions related to the prevention of serious crime, the protection of life, or the identification of suspects in particularly serious criminal offences. Even where an exception applies, the use of such systems is subject to stringent procedural safeguards, prior authorisation, and fundamental rights impact assessments.
The prohibitions established in Article 5 apply alongside other areas of EU law, including data protection and consumer protection legislation. As clarified in Recital 38, these rules complement existing prohibitions under the GDPR and other Union instruments, without diminishing the application of parallel legal frameworks.
From a compliance perspective, prohibited AI practices represent a non-negotiable red line under the AI Act. Unlike high-risk AI systems, which may be lawfully placed on the market subject to compliance, AI systems falling within Article 5 must be discontinued, redesigned, or fundamentally repurposed. Failure to comply with these prohibitions exposes providers and deployers to the most severe administrative fines under the AI Act’s enforcement regime.
Conformity assessment of high-risk AI systems
High-risk AI systems may be placed on the market or put into service within the European Union only if they have successfully undergone a conformity assessment demonstrating compliance with the mandatory requirements laid down in the AI Act. The conformity assessment framework is established in Chapter III, Sections 3 and 5 (in particular Articles 16 and 40 to 49) of Regulation (EU) 2024/1689.
Pursuant to Article 16, the primary responsibility for ensuring conformity rests with the provider of the high-risk AI system. Before placing a system on the market or putting it into service, the provider must verify and document that the AI system complies with all applicable requirements set out in Articles 8 to 15, including risk management, data governance, technical documentation, record-keeping, transparency, human oversight, and robustness, accuracy, and cybersecurity.
The form of conformity assessment applicable to a high-risk AI system depends on whether the system is subject to existing EU harmonisation legislation listed in Annex I. Where a high-risk AI system is a safety component of a product, or is itself a product, that already requires third-party conformity assessment under sectoral EU law, the conformity assessment under the AI Act is integrated into that existing procedure, in accordance with Article 43(3). In such cases, the notified body designated under the relevant sectoral legislation also assesses compliance with the AI Act requirements.
For stand-alone high-risk AI systems that are not covered by sectoral product legislation requiring third-party assessment, the AI Act provides for a self-assessment procedure by the provider, as set out in Article 43(2). Under this procedure, the provider must establish comprehensive technical documentation and perform an internal conformity assessment demonstrating compliance with the AI Act. This assessment must be completed before the system is placed on the market or put into service.
However, the availability of self-assessment is not absolute. For the biometric systems listed in point 1 of Annex III, Article 43(1) requires the involvement of a notified body where harmonised standards or common specifications are not fully applied. The European Commission is also empowered to adapt the conformity assessment procedures through delegated acts, ensuring consistent application across Member States.
Upon successful completion of the conformity assessment, the provider must draw up an EU declaration of conformity in accordance with Article 47 and affix the CE marking to the high-risk AI system, as provided in Article 48. The CE marking indicates conformity not only with the AI Act, but with all other applicable Union harmonisation legislation.
Conformity assessment under the AI Act is not a one-time obligation. Providers are required to maintain compliance throughout the lifecycle of the AI system. Where a high-risk AI system is substantially modified, it must undergo a new conformity assessment, as provided in Article 43(4). In such cases, the modified system is treated as a new high-risk AI system for the purposes of conformity assessment.
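As a rough illustration of the sequencing described above, the sketch below models the routing and lifecycle steps as a compliance workflow. The record fields and routing function are hypothetical simplifications; actual routing under Article 43 turns on detailed legal criteria.

```python
from dataclasses import dataclass

@dataclass
class HighRiskSystem:
    """Hypothetical compliance record for a high-risk AI system."""
    name: str
    annex_i_product: bool              # covered by sectoral Annex I legislation
    harmonised_standards_applied: bool
    substantially_modified: bool = False

def conformity_route(system: HighRiskSystem) -> str:
    """Illustrative sketch of the Article 43 routing described above."""
    if system.annex_i_product:
        return "integrated into the sectoral third-party procedure (Art. 43(3))"
    if system.harmonised_standards_applied:
        return "provider self-assessment / internal control (Art. 43(2))"
    return "notified-body involvement may be required (Art. 43(1))"

def place_on_market(system: HighRiskSystem) -> None:
    if system.substantially_modified:
        print("substantial modification -> new conformity assessment (Art. 43(4))")
    print("route:", conformity_route(system))
    print("then: EU declaration of conformity (Art. 47) and CE marking (Art. 48)")

place_on_market(HighRiskSystem("cv-screener", annex_i_product=False,
                               harmonised_standards_applied=True))
```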
From a regulatory perspective, the conformity assessment regime under the AI Act mirrors the logic of EU product safety and financial services regulation, emphasising ex ante compliance, documented accountability, and traceability. For providers of high-risk AI systems, conformity assessment represents one of the most resource-intensive elements of AI Act compliance and requires close coordination between legal, technical, and governance functions.
Registration of high-risk AI systems
In addition to conformity assessment and CE marking, the AI Act introduces a mandatory registration requirement for certain high-risk AI systems as an additional layer of regulatory oversight and transparency. The legal framework governing registration is set out in Article 49 of Regulation (EU) 2024/1689.
Pursuant to Article 49, providers of high-risk AI systems listed in Annex III are required to register themselves and their systems in a central EU database before placing them on the market or putting them into service. This obligation applies irrespective of whether the provider is established within the European Union or in a third country, provided that the AI system falls within the scope of the AI Act.
The registration requirement serves several regulatory objectives. First, it enables market surveillance authorities and other competent bodies to identify and monitor high-risk AI systems deployed across the Union. Second, it enhances public transparency, allowing relevant stakeholders to understand where and how high-risk AI systems are being used. Third, it supports enforcement by creating a traceable link between conformity assessment, CE marking, and post-market monitoring obligations.
The information to be registered is specified in Article 49 in conjunction with Annex VIII and includes, inter alia, the identity of the provider, a description of the AI system and its intended purpose, references to the applicable conformity assessment procedure, and details of the notified body involved, where relevant. The registered information must be kept accurate and up to date for as long as the AI system remains on the market or in service.
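For illustration only, a provider's internal tooling might capture the registration data as a structured record before submission. The field names below are assumptions loosely modelled on the categories of information listed in Article 49 and Annex VIII, not the database's actual schema.

```python
from dataclasses import dataclass
from typing import Optional

@dataclass
class RegistrationRecord:
    """Hypothetical record of the information to be registered."""
    provider_name: str
    provider_contact: str
    system_name: str
    intended_purpose: str
    conformity_assessment_procedure: str
    notified_body_id: Optional[str] = None  # only where a notified body was involved
    declaration_of_conformity_ref: Optional[str] = None

entry = RegistrationRecord(
    provider_name="ExampleAI GmbH",
    provider_contact="compliance@example-ai.eu",
    system_name="cv-screener v2",
    intended_purpose="ranking of job applications (Annex III, employment)",
    conformity_assessment_procedure="internal control (Annex VI)",
)
print(entry)
```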
The AI Act distinguishes between provider registration and deployer registration. While providers must ensure that their Annex III high-risk AI systems are registered, deployers that are public authorities, or that act on their behalf, must also register their use of such systems, particularly where these are deployed in sensitive areas affecting fundamental rights. This distinction reflects the shared responsibility model underpinning the AI Act.
The EU database for high-risk AI systems is established and maintained by the European Commission, as provided in Article 71. Access to the database is tiered, ensuring that confidential or security-sensitive information is protected, while still allowing competent authorities and, where appropriate, the public to access relevant data. The Commission is empowered to adopt implementing acts specifying the technical and operational aspects of the database.
Failure to comply with registration obligations constitutes a breach of the AI Act and may trigger enforcement measures and administrative fines under the penalty regime set out in Chapter XII. Registration is therefore not a mere formality, but a substantive compliance requirement closely linked to conformity assessment and market surveillance.
In practical terms, the registration of high-risk AI systems should be treated as an integral step in the AI system lifecycle, following conformity assessment and preceding market deployment. Providers must ensure that internal compliance processes, documentation workflows, and governance structures are aligned to meet registration obligations in a timely and accurate manner.
Sub-Services for Responsible AI Licenses
While the AI Act does not introduce a standalone “AI license” as a single authorisation instrument, it establishes a framework of regulatory obligations and compliance functions that, in practice, give rise to a set of sub-services for responsible AI deployment and governance. These sub-services are particularly relevant for providers and deployers of high-risk AI systems, as well as for entities seeking to demonstrate structured and ongoing compliance with the AI Act.
The legal basis for these sub-services is distributed across Chapter III, Chapter IV and Chapter IX of Regulation (EU) 2024/1689. Together, these provisions require providers and deployers to establish internal organisational, technical, and procedural arrangements that effectively function as components of a responsible AI compliance framework.
In practice, responsible AI sub-services typically include AI risk management and governance functions, as required under Article 9, which obliges providers of high-risk AI systems to implement a continuous risk management system throughout the lifecycle of the AI system. This function encompasses risk identification, evaluation, mitigation measures, and periodic reassessment, and is closely aligned with broader enterprise risk management and compliance frameworks.
Another core sub-service relates to data governance and data quality management, as mandated by Article 10. Providers must ensure that training, validation, and testing datasets meet strict standards of relevance, representativeness, completeness, and absence of bias, to the extent technically feasible. In operational terms, this often requires the establishment of dedicated data governance policies, audit trails, and documentation processes, which may be supported by specialised internal teams or external compliance service providers.
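Article 10 states legal criteria rather than technical tests, but data teams often operationalise parts of them in their pipelines. The sketch below shows what such checks might look like for tabular training data; the metrics chosen are assumptions for illustration, not requirements derived from the Regulation.

```python
import pandas as pd  # assumes tabular training data

def dataset_governance_report(df: pd.DataFrame, required_columns: list[str]) -> dict:
    """Illustrative checks inspired by Article 10's criteria of relevance,
    representativeness, completeness, and error screening."""
    return {
        "required_fields_present": all(c in df.columns for c in required_columns),
        "overall_missing_ratio": float(df.isna().mean().mean()),
        "duplicate_rows": int(df.duplicated().sum()),
        "row_count": len(df),
    }

# Example with a toy dataset.
df = pd.DataFrame({"age": [34, None, 29], "outcome": [1, 0, 1]})
print(dataset_governance_report(df, required_columns=["age", "outcome"]))
```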
The AI Act also implicitly gives rise to sub-services focused on technical documentation, record-keeping, and traceability, as required under Articles 11 and 12. These obligations require providers to maintain detailed documentation enabling regulators and notified bodies to assess compliance. In practice, this function resembles regulatory reporting and documentation services commonly found in financial services and product compliance regimes.
Human oversight mechanisms form another essential component of responsible AI sub-services. Under Article 14, providers must design AI systems in a manner that enables effective human oversight, including the ability to interpret outputs, intervene where necessary, and prevent or minimise risks. This often translates into governance structures, training programmes, and escalation procedures that support responsible deployment and use of AI systems.
Finally, post-market monitoring and incident management services play a critical role in responsible AI compliance. Articles 72 to 76 require providers to implement post-market monitoring systems, report serious incidents and malfunctioning, and cooperate with market surveillance authorities. These obligations effectively create an ongoing compliance function similar to post-authorisation supervision in other regulated sectors.
Although the AI Act does not formally label these activities as “licensed services”, in practice they form a de facto compliance ecosystem for responsible AI. Providers and deployers increasingly rely on internal compliance units, external legal and technical advisors, and specialised AI governance service providers to fulfil these obligations in a structured and scalable manner.
Framework for notified bodies
The AI Act establishes a dedicated framework for notified bodies, which play a central role in the conformity assessment and supervisory architecture applicable to certain high-risk AI systems. The legal basis for notifying authorities and notified bodies is set out in Chapter III, Section 4 (Articles 28 to 39) of Regulation (EU) 2024/1689.
Notified bodies are independent conformity assessment bodies designated by EU Member States and notified to the European Commission. Their primary function under the AI Act is to assess whether high-risk AI systems subject to third-party conformity assessment comply with the mandatory requirements laid down in Chapter III of the Regulation. This role closely mirrors the function of notified bodies under existing EU product safety and conformity regimes, ensuring regulatory continuity and coherence.
Pursuant to Articles 28 to 31, Member States, acting through their notifying authorities, are responsible for designating notified bodies in accordance with strict criteria relating to independence, impartiality, technical competence, and organisational capacity. Notified bodies must demonstrate sufficient expertise in artificial intelligence technologies, data governance, cybersecurity, and fundamental rights protection, as well as an in-depth understanding of the regulatory requirements of the AI Act.
Once designated, notified bodies are subject to ongoing supervision by national notifying authorities, which must monitor them on a continuous basis to ensure that they maintain compliance with the designation requirements. Where deficiencies are identified, the notification may be restricted, suspended, or withdrawn in accordance with Article 36.
The operational obligations of notified bodies are further specified in Article 34, which requires them to conduct conformity assessments with due professional diligence, ensure confidentiality of sensitive information, and avoid conflicts of interest. Notified bodies must also maintain appropriate internal procedures for handling complaints, appeals, and requests for review related to conformity assessment decisions.
The AI Act introduces additional safeguards to ensure consistency and trust in notified bodies’ activities. Under Article 38, notified bodies are required to cooperate with each other and with market surveillance authorities, including through the exchange of information and participation in coordination activities at EU level. This cooperation is intended to prevent divergent interpretations of AI Act requirements and to promote harmonised enforcement across the internal market.
Importantly, the AI Act reinforces accountability by imposing liability and transparency obligations on notified bodies. They may be held responsible for failures in conformity assessment that result from negligence or lack of due diligence, and they are required to document and justify their assessment decisions in a manner that allows effective regulatory oversight.
From a practical standpoint, the involvement of a notified body significantly impacts the timeline, cost, and complexity of placing a high-risk AI system on the EU market. Providers must therefore carefully assess at an early stage whether their AI systems fall within categories requiring third-party assessment and plan their compliance strategy accordingly.
Penalties
The AI Act establishes a robust and dissuasive penalties regime designed to ensure effective enforcement of its provisions across the European Union. The legal framework governing administrative fines and other corrective measures is set out in Chapter XII (Articles 99 to 101) of Regulation (EU) 2024/1689.
Pursuant to Article 99, Member States are required to lay down rules on penalties applicable to infringements of the AI Act and to take all measures necessary to ensure that those penalties are implemented effectively. While enforcement remains primarily at national level, the Regulation establishes harmonised maximum thresholds for administrative fines in order to ensure consistency and proportionality across the internal market.
The AI Act adopts a tiered approach to administrative fines, reflecting the severity of the infringement and the potential impact on public interests and fundamental rights. The most serious infringements, including the placing on the market, putting into service, or use of prohibited AI practices under Article 5, are subject to the highest level of sanctions. For such violations, administrative fines may reach up to EUR 35 million or up to 7% of the total worldwide annual turnover of the preceding financial year, whichever is higher, as provided in Article 99(3).
Infringements of obligations relating to high-risk AI systems, including failures to comply with the mandatory requirements set out in Articles 8 to 15, conformity assessment obligations, or registration requirements, are subject to lower, but still significant, administrative fines. Under Article 99(4), such infringements may result in fines of up to EUR 15 million or up to 3% of global annual turnover, whichever is higher.
Less severe infringements, such as the provision of incorrect, incomplete, or misleading information to notified bodies or competent authorities, are addressed under Article 99(5). These violations may result in administrative fines of up to EUR 7.5 million or up to 1% of global annual turnover.
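The "whichever is higher" mechanics of these caps can be illustrated with simple arithmetic. The helper below is a sketch of the upper bounds only; actual fines are set by national authorities after weighing the factors discussed next, and for SMEs the lower of the two amounts applies (Article 99(6)).

```python
def max_fine_eur(article_99_paragraph: int, worldwide_turnover_eur: float) -> float:
    """Upper bound of the fine: fixed cap or turnover share, whichever is higher."""
    caps = {
        3: (35_000_000, 0.07),   # prohibited practices (Art. 99(3))
        4: (15_000_000, 0.03),   # high-risk and related obligations (Art. 99(4))
        5: (7_500_000, 0.01),    # incorrect or misleading information (Art. 99(5))
    }
    fixed_cap, share = caps[article_99_paragraph]
    # Note: for SMEs and start-ups the *lower* of the two applies (Art. 99(6)).
    return max(fixed_cap, share * worldwide_turnover_eur)

# Example: EUR 2 billion turnover, breach of Article 5 -> 7% exceeds EUR 35m.
print(max_fine_eur(3, 2_000_000_000))  # 140000000.0
```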
In determining the appropriate level of administrative fines, competent authorities must take into account a range of factors, including the nature, gravity, and duration of the infringement, the degree of responsibility of the infringing party, any previous infringements, and the level of cooperation with supervisory authorities. This assessment-based approach aligns the AI Act with existing EU enforcement frameworks, including those under the GDPR.
In addition to administrative fines, the AI Act empowers competent authorities to impose corrective measures, such as requiring providers or deployers to bring AI systems into compliance, withdraw non-compliant systems from the market, or suspend their use. These measures may be applied independently of, or in combination with, administrative fines, depending on the circumstances of the case.
Importantly, the AI Act also recognises the need for proportionality, particularly in relation to small and medium-sized enterprises (SMEs) and start-ups. As provided in Article 99(6), fines imposed on SMEs and start-ups are capped at the lower of the fixed amount and the turnover-based percentage, and authorities must take the size and economic capacity of the infringing entity into account, without undermining the effectiveness and deterrent effect of enforcement.
The penalties regime under the AI Act underscores the EU’s commitment to ensuring that artificial intelligence is developed and deployed in a manner that respects fundamental rights and public trust. For providers and deployers of AI systems, the potential financial and reputational consequences of non-compliance make early and structured AI Act compliance a critical element of regulatory and business strategy.
Official Sources & Primary Legislation
The preparation of this article is based exclusively on official EU legislative acts, regulatory guidance, and primary sources governing the regulation of artificial intelligence within the European Union.
- Regulation (EU) 2024/1689 of the European Parliament and of the Council laying down harmonised rules on artificial intelligence (Artificial Intelligence Act) – https://eur-lex.europa.eu/eli/reg/2024/1689/oj
- Charter of Fundamental Rights of the European Union – https://eur-lex.europa.eu/legal-content/EN/TXT/?uri=CELEX:12012P/TXT
- Treaty on the Functioning of the European Union (TFEU) – https://eur-lex.europa.eu/legal-content/EN/TXT/?uri=CELEX:12012E/TXT
- General Data Protection Regulation (GDPR) – Regulation (EU) 2016/679 – https://eur-lex.europa.eu/eli/reg/2016/679/oj
- Product Safety and Conformity Framework (CE marking & notified bodies) – https://single-market-economy.ec.europa.eu/single-market/ce-marking_en
- Market surveillance and enforcement framework under EU harmonisation legislation – https://single-market-economy.ec.europa.eu/single-market/goods/building-blocks/market-surveillance_en