
The second wave of obligations under the AI Act is coming

The implementation of Regulation (EU) 2024/1689 (the AI Act) is entering its next phase. As of 2 August 2025, the second wave of obligations under the EU’s artificial intelligence (AI) regulation will take effect. This phase will significantly impact technology companies, AI system providers, regulatory authorities, and Member States.

We already covered the first wave of obligations under the AI Act, which marked the beginning of AI regulation within the European Union, in our previous article: The First Wave of Obligations under the AI Act Has Entered into Force.

The second wave of obligations will primarily concern General Purpose Artificial Intelligence models (hereinafter as “GPAI models”) and the transparency of their use, notified conformity assessment bodies, national supervisory authorities, and the confidentiality regime. At the same time, the sanctioning and penalty regime will be activated, establishing an enforcement framework for obligations imposed on providers, importers, distributors, and users of AI systems.

So, what can we expect starting from 2 August 2025?

Transparency and Accountability in GPAI Models
One of the most significant pillars of the upcoming regulatory phase is Chapter V of the AI Act, which specifically addresses GPAI models. Chapter V of the AI Act responds to the rapid development of these GPAI models, which can be adapted for a wide range of applications, from chat assistants to image analysis or content generation.

Under the AI Act, a GPAI model is defined as an AI model, including where such an AI model is trained with a large amount of data using self-supervision at scale, that displays significant generality and is capable of competently performing a wide range of distinct tasks regardless of the way the model is placed on the market and that can be integrated into a variety of downstream systems or applications, except AI models that are used for research, development or prototyping activities before they are placed on the market[1].

Unlike high-risk AI systems (such as AI systems used for facial recognition in public spaces, AI-driven decision-making in healthcare, recruitment algorithms, or creditworthiness assessments), GPAI models are not tied to a specific purpose. Therefore, regulating them poses a unique challenge.

Starting from 2 August 2025, all providers of GPAI models will be required to ensure a high degree of transparency. This obligation includes, in particular, maintaining technical documentation containing detailed information on training methods, the computational resources used, model specifications, its capabilities, risks, and potential uses. Additionally, GPAI model providers must publicly disclose clear and comprehensible summaries of the training data, including any use of content protected by copyright or intellectual property rights.

In the Czech Republic, compliance with these obligations will be supported by a so-called regulatory sandbox, which will be operated by the Czech Agency for Standardization (hereinafter as the “CAS”).[2] Within this sandbox, companies will be able to test their GPAI models with state support and verify their compliance with the requirements of the AI Act. The Ministry of Industry and Trade (hereinafter also as the “MPO”), which is the main authority responsible for implementation, is also preparing a draft of national legislation reflecting these requirements.[3]

A new requirement is also the obligation to provide instructions for the use of GPAI models, which must inform users about how the model works, its limitations, integration possibilities, and potential risks. If a GPAI model is used as a component in a high-risk AI system (e.g., one intended for use in the healthcare sector), the GPAI model provider is required to cooperate with the developer of the final AI system. This means the provider must supply technical support, documentation, and, where necessary, participate in the conformity assessment process.

Even stricter rules apply to so-called GPAI models with systemic risk under Article 51 of the AI Act, i.e. models whose capabilities and computational scale give them a significant impact (a model is presumed to pose systemic risk where the cumulative computation used for its training exceeds 10^25 floating-point operations). Providers of such models must regularly test their performance, robustness, and resistance to misuse, implement a risk management system, and report serious incidents. These GPAI models must also be notified to the European Commission.

On 18 June 2025, the European Commission issued the Guidelines on the scope of obligations for providers of General Purpose AI (GPAI) models under the AI Act (hereinafter as the “Guidelines”)[4]. The Guidelines provide the Commission’s interpretation of the obligations for GPAI model providers. Although they are not legally binding, they offer valuable insight into how national regulators are likely to interpret and enforce the individual obligations under the AI Act.

Notifying Authorities: A New Pillar for Conformity Assessment
As of 2 August 2025, EU Member States will be required to designate or establish notifying authorities in accordance with the conditions set out in the AI Act. These notifying authorities must be technically competent, independent, and impartial, and must avoid any conflict of interest with the conformity assessment bodies they oversee. Their activities will be coordinated at the European level.

The Czech Republic has designated the Czech Office for Standards, Metrology and Testing (hereinafter as the “UNMZ”) as the notifying authority responsible for registering and overseeing notified bodies that carry out conformity assessments of AI systems. However, UNMZ itself will not perform certifications. Certification will be handled by specialized organizations that meet the required conditions and are reported to the European Commission.[5]

The task of the notified bodies will be to verify whether high-risk AI systems meet the requirements set out in the AI Act, including requirements concerning data quality, risk management, system robustness, human oversight, and cybersecurity. The outcome will be the issuance of a certificate of conformity, allowing the system to be placed on the European market. A certificate may also be restricted, suspended, or withdrawn if deficiencies or new circumstances emerge that increase the risk to users or affected individuals.

It is also important to note that, to prevent conflicts of interest, no organization that develops or sells AI systems may become a notified body. In practice, this role will therefore be fulfilled primarily by independent specialized and certification institutions.

Supervisory Authorities
Under Chapter VII of the AI Act, EU Member States will be required to appoint national supervisory authorities responsible for monitoring the application of the AI Act within their respective territories. These authorities must be independent and have adequate human and financial resources, as well as the powers to conduct investigations and inspections, request documentation, and, in extreme cases, prohibit the operation of high-risk AI systems.

Establishing a functional administrative structure is crucial to ensuring that the AI Act becomes not merely a formal regulation, but an effective tool for protecting fundamental rights, safety, and legal certainty in the digital space.

The Ministry of Industry and Trade has been appointed as the main coordinator of the implementation process. A government envoy for AI has also been appointed, and a Competence Centre for AI in Public Administration has been established.[6]

In summary, the key institutions under the AI Act in the Czech Republic will be:

  • MPO – the main coordinator of the AI Act’s implementation in the Czech Republic;
  • Czech Telecommunication Office (hereinafter as the “CTO”) – the market surveillance authority responsible for ensuring compliance with the AI Act in the field of market regulation;
  • UNMZ – the notifying authority tasked with designating and notifying conformity assessment bodies and monitoring their activities in line with the AI Act; and
  • CAS – the authority responsible for establishing and managing the regulatory sandbox.

At the EU level, a horizontal coordination body called the European Artificial Intelligence Board (EAIB) will be established. Its members will include representatives of all national supervisory authorities and the European Commission. The EAIB will be responsible for supporting the consistent application of the regulation, issuing guidance, collecting data on regulatory impacts, and facilitating information exchange between Member States.

Protection of Confidential Information
For technology companies and AI system providers in particular, Article 78 of the AI Act (Confidentiality) is of major importance. The AI Act grants supervisory authorities and other relevant institutions extensive access to documentation, source data, and internal information related to the development of AI systems. Article 78 therefore explicitly requires that all confidential information, including trade secrets, technical specifications, or the composition of training datasets, must be protected against unauthorized disclosure.

The obligation to maintain confidentiality applies not only to officials and employees of supervisory authorities, but also to members of notified bodies, expert advisers, members of the European Artificial Intelligence Board, and other involved parties. The only exceptions are cases where the disclosure of information is necessary for the exercise of powers under EU law, such as during judicial proceedings or in the context of cross-border cooperation.

In the Czech Republic, confidentiality rules will be addressed in the forthcoming national legislation being prepared by the Ministry of Industry and Trade. These measures will be incorporated into the internal regulations of supervisory authorities to ensure the protection of trade secrets and technical documentation.

Sanctions and Penalties under the AI Act
As part of the second wave of obligations under the AI Act, Articles 99 and 100 will take effect, marking the introduction of the first sanctions and administrative penalties in the field of AI regulation. Through this, the AI Act ensures that breaches of obligations will not be merely symbolic but will entail real consequences.

The highest sanctions apply to breaches of the outright prohibitions set out in Article 5 of the AI Act, namely, the use of prohibited AI practices, such as systems employing subliminal techniques, harmful manipulation and deception, exploitation of user vulnerabilities, and similar conduct. In such cases, fines may reach up to EUR 35 million or 7% of the company’s total worldwide annual turnover, whichever is higher.

For less serious violations, such as failure to comply with documentation, registration, or transparency obligations, fines may go up to EUR 15 million or 3% of annual turnover. For individuals or small and medium-sized enterprises (SMEs), sanctions may be adjusted to ensure proportionality.
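The fine ceilings above follow a simple "whichever is higher" rule: the cap is the greater of a fixed amount and a percentage of total worldwide annual turnover. A minimal sketch (illustrative only, not legal advice; the function name and example turnover figures are our own) shows how the two tiers compare:

```python
def fine_cap(fixed_cap_eur: float, turnover_pct: float, worldwide_turnover_eur: float) -> float:
    """Upper bound of an administrative fine under the AI Act:
    the higher of a fixed amount and a percentage of total
    worldwide annual turnover (cf. Art. 99(3) and 99(4))."""
    return max(fixed_cap_eur, turnover_pct * worldwide_turnover_eur)

# Prohibited practices (Art. 5): EUR 35 million or 7% of turnover.
# For a company with EUR 1 billion turnover, the turnover-based limb prevails:
print(fine_cap(35_000_000, 0.07, 1_000_000_000))  # 70000000.0

# Documentation/transparency breaches: EUR 15 million or 3% of turnover.
# For a company with EUR 200 million turnover, the fixed amount prevails:
print(fine_cap(15_000_000, 0.03, 200_000_000))  # 15000000.0
```

Note that for SMEs and start-ups the AI Act inverts the rule: under Article 99(6), the applicable cap is the lower of the two amounts.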

Article 99 of the AI Act requires Member States to ensure that the penalty regime is effective, proportionate, and dissuasive. Member States may also introduce additional sanctions, including criminal penalties, where their legal systems permit. This creates a multi-tiered system of liability that applies not only to providers and developers, but also to importers, distributors, users, and, in certain cases, public authorities and institutions.

Conclusion
On 2 August 2025, a key part of the AI Act will become applicable, reshaping the approach to the development, use, and regulation of artificial intelligence in the EU. Companies, GPAI model providers, distributors, importers, public authorities, and notified bodies will all be required to actively comply with the new obligations. Transparency, documentation, human oversight, and risk management are no longer merely ethical aspirations; they are becoming legally enforceable standards.

At the same time, the sanctioning regime will apply for the first time, enabling the imposition of significant financial penalties for violations of the obligations that have taken effect. Any lapse, such as the absence of instructions for a GPAI model, failure to submit required documentation, or neglect of notification duties, may therefore have direct legal consequences. The high fines are intended to motivate responsible behavior from the outset.

In the near future, it will be crucial to monitor secondary legislation and implementing acts that will define technical standards and complete the operational framework. For all affected entities, now is the right time to conduct internal audits, define responsibilities, and assess whether their technologies or operations fall within the regulated scope.

The next milestone in AI regulation will be 2 August 2026, when most of the remaining provisions of the AI Act will become applicable. The exception is Article 6(1), which governs the classification of high-risk AI systems linked to products covered by Annex I of the Act and will only apply from 2 August 2027.

If you have any questions regarding AI regulation or other areas of EU regulation and compliance, the team at PEYTON legal is at your disposal.


[1] Art. 3(63) of the AI Act.

[2] UNMZ. (2025). The Government Has Approved a Key Document for the Development and Security of AI in the Czech Republic. https://unmz.gov.cz/vlada-schvalila-klicovy-dokument-pro-rozvoj-i-bezpecnost-ai-v-cesku/

[3] Ministry of Industry and Trade. (2025, May 28). Another Step Towards the Development of Artificial Intelligence in the Czech Republic: The Government Has Approved the Proposal to Ensure Implementation of the AI Act. The New Coordinator Is the Ministry of Industry and Trade.  https://mpo.gov.cz/cz/rozcestnik/pro-media/tiskove-zpravy/dalsi-krok-k-rozvoji-umele-inteligence-v-cesku–vlada-schvalila-navrh-na-zajisteni-implementace-ai-aktu–novym-gestorem-je-mpo–287763/

[4] European Commission. (2025). Guidelines on the scope of obligations for providers of general-purpose AI models under the AI Act. Shaping Europe’s digital future.

[5] Ministry of Industry and Trade. (2025, May 28). Another Step Towards the Development of Artificial Intelligence in the Czech Republic: The Government Has Approved the Proposal to Ensure Implementation of the AI Act. The New Coordinator Is the Ministry of Industry and Trade.  https://mpo.gov.cz/cz/rozcestnik/pro-media/tiskove-zpravy/dalsi-krok-k-rozvoji-umele-inteligence-v-cesku–vlada-schvalila-navrh-na-zajisteni-implementace-ai-aktu–novym-gestorem-je-mpo–287763/

[6] Ministry of Industry and Trade. (2025, May 28). Another Step Towards the Development of Artificial Intelligence in the Czech Republic: The Government Has Approved the Proposal to Ensure Implementation of the AI Act. The New Coordinator Is the Ministry of Industry and Trade.  https://mpo.gov.cz/cz/rozcestnik/pro-media/tiskove-zpravy/dalsi-krok-k-rozvoji-umele-inteligence-v-cesku–vlada-schvalila-navrh-na-zajisteni-implementace-ai-aktu–novym-gestorem-je-mpo–287763/

 

Mgr. Jakub Málek, managing partner – malek@plegal.cz

JUDr. Tereza Pechová, junior lawyer – pechova@plegal.cz

Anna Němcová, legal assistant – nemcova@plegal.cz

 

www.peytonlegal.en

 

31. 7. 2025
