On 19 November 2025, the European Commission (the Commission) presented the first official draft of the Digital Omnibus legislative package[1], whose main objective is to modernise and streamline European digital law. Alongside the core Digital Omnibus package, a separate proposal entitled Digital Omnibus on AI[2] was also published.
If you are interested in the core Digital Omnibus legislative package, we refer you to our previous articles here: Digital Omnibus: proposed changes to the GDPR in the EU's digital era and Digital Omnibus: proposed changes to the Data Act in the EU's digital era.
The Digital Omnibus on AI is a targeted amending regulation aimed at ensuring that Regulation (EU) 2024/1689[3] (AI Act) can be implemented smoothly, uniformly and at a pace corresponding to Europe’s actual technical and institutional readiness.
In EU law, the term “omnibus” denotes a legislative instrument that simultaneously amends several existing acts in order to align them and eliminate inconsistencies. The Digital Omnibus on AI thus in practice functions as a corrective and harmonising instrument of the rapidly expanding body of EU digital law. Its ambition is to ensure greater coherence and internal consistency between the AI Act and other key instruments, such as the GDPR[4], the Data Act[5], the Cyber Resilience Act[6] and others, as well as sector-specific safety legislation, thereby reducing regulatory fragmentation and legal uncertainty in the EU internal market. This is not deregulation, but practical optimisation.
The publication of the Digital Omnibus on AI was preceded by months of consultations with representatives of the business sector, small and medium-sized enterprises, the public and the EU Member States. These discussions showed that, despite broad support for the objectives of the AI Act, its implementation framework gives rise to significant uncertainty. This uncertainty stems in particular from delays in the designation of national authorities, gaps in harmonised standards and the complex interlinkages between the AI Act and other EU legislation in the digital economy. The Digital Omnibus on AI directly addresses these problems.
What, then, does the Digital Omnibus on AI specifically contain?
At its core, the proposal pursues several interrelated objectives: harmonisation of key deadlines and timetables, streamlining and clarification of overlapping obligations, removal of unnecessary administrative burdens for businesses and supervisory authorities, and clarification of the relationships between the AI Act and other EU legal acts. At the same time, it strengthens enforcement mechanisms where this is necessary given the nature of AI systems, in particular through centralised supervision at EU level.
Amendment of deadlines, dates of application and transitional provisions
One of the most important interventions of the Digital Omnibus on AI is the proposed adjustment of the dates of application of the wave of obligations for high-risk AI systems. The proposal introduces a conditional mechanism under which the start of application of these obligations is explicitly linked to the availability of key compliance support tools, in particular harmonised standards, common specifications and Commission guidelines.
The purpose of this change is to respond to the uncertainty caused by their delayed adoption and to provide undertakings with a realistic and predictable time frame to prepare for the new regulatory requirements. Once the Commission formally confirms that these tools are available, the provisions of the AI Act governing high-risk systems will apply in two phases.
For so-called “stand-alone” high-risk AI systems listed in Annex III to the AI Act – for example, creditworthiness assessment systems – the obligations will begin to apply six months after that confirmation. For high-risk systems regulated by the sectoral legislation listed in Annex I to the AI Act, typically medical devices, application will be deferred by twelve months.
At the same time, the proposal introduces fixed “backstop” dates to prevent further unlimited postponement of the obligations. If the Commission does not formally confirm the availability of standards, specifications and guidelines, the obligations for high-risk systems will apply at the latest:
- from 2 December 2027 for systems listed in Annex III to the AI Act;
- from 2 August 2028 for systems listed in Annex I to the AI Act.
The obligations for high-risk AI systems include in particular detailed requirements relating to data governance, transparency, technical documentation, human oversight and robustness of AI systems. In line with the new timing, the transitional periods for AI systems already placed on the market or put into service are also technically adjusted so that they correspond to the new dates of applicability and it is clear which AI systems are subject to the new regimes.
Transitional period for labelling outputs of generative AI
The AI Act imposes on providers of generative AI systems an obligation to label artificially generated or manipulated content, such as synthetic audio, video or text. The Digital Omnibus on AI proposes to defer the start of application of these obligations until 2 February 2027 for AI systems that were placed on the market before 2 August 2026.
The purpose of this deferral is to give providers of generative AI systems built on general-purpose AI (GPAI) models adequate time to adapt their solutions technically and organisationally to the new obligations without disrupting the market. At the same time, the proposal reflects the fact that a Code of Practice (the GPAI Code of Practice), intended to serve as practical guidance for the application of the transparency and labelling obligations for generative AI outputs under the AI Act, is still being prepared.
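By way of illustration only: the AI Act requires machine-readable marking of synthetic content but does not prescribe a specific technique (watermarking, embedded metadata or content provenance standards are all possible approaches). The short Python sketch below assumes a hypothetical text-generation pipeline and simply attaches a provenance record to each output; the function names and fields are illustrative assumptions, not part of the proposal or of any provider's actual API.

# Illustrative sketch: one simple way to attach a machine-readable provenance
# record to generated content. Not a format mandated by the AI Act; real
# deployments may rely on watermarking or established provenance standards.
import hashlib
import json
from datetime import datetime, timezone

def label_synthetic_output(text: str, model_id: str) -> dict:
    """Wrap generated text in an envelope that flags it as AI-generated."""
    return {
        "content": text,
        "provenance": {
            "ai_generated": True,  # explicit machine-readable flag
            "model_id": model_id,  # which system produced the output
            "generated_at": datetime.now(timezone.utc).isoformat(),
            # the hash lets downstream services detect later alterations
            "content_sha256": hashlib.sha256(text.encode("utf-8")).hexdigest(),
        },
    }

if __name__ == "__main__":
    # generate_text stands in for a hypothetical call to a generative model
    def generate_text(prompt: str) -> str:
        return f"Synthetic answer to: {prompt}"

    labelled = label_synthetic_output(generate_text("What is the AI Act?"), "example-gpai-v1")
    print(json.dumps(labelled, indent=2))

Whatever technique is ultimately chosen, the practical point for providers is the same: the marking must travel with the content and remain detectable by automated means.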
Special legal basis for the use of sensitive data in bias testing
The Digital Omnibus on AI introduces a new Article 4a of the AI Act, which creates an explicit legal basis for the exceptional processing of special categories of personal data in the development, testing and fine-tuning of AI systems. This allows providers and deployers of AI systems to process sensitive personal data for the purposes of detecting, measuring and correcting systematic bias, in situations where there is no other technically or methodologically comparable alternative for achieving this objective.
This authorisation is, however, subject to strict substantive and procedural safeguards. The processing must be necessary and proportionate to the pursued purpose, limited to the minimum data required, and subject to appropriate technical and organisational measures, including access restrictions, documentation of necessity and an obligation to erase the data once the purpose of the processing has been achieved.
By this change, the proposal responds to a long-standing practical problem: without working with sensitive demographic characteristics, it is in many cases effectively impossible to reliably identify and correct bias. The new Article 4a of the AI Act thus provides a legally clearer and more predictable framework for the responsible development of fair AI systems, without weakening the standard of privacy protection under the GDPR.
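To illustrate the kind of measurement the new legal basis is meant to enable, the minimal Python sketch below compares an AI system's outcomes across protected groups. The data, the metric (a simple selection-rate difference) and the erasure step are assumptions chosen for illustration; the proposal does not prescribe any particular methodology.

# Hypothetical sketch of bias measurement using a sensitive attribute.
# Metrics, thresholds and data handling here are illustrative only.
from collections import defaultdict

def selection_rates(records, group_key="gender", decision_key="approved"):
    """Return the share of positive decisions per protected group."""
    totals, positives = defaultdict(int), defaultdict(int)
    for record in records:
        totals[record[group_key]] += 1
        positives[record[group_key]] += int(record[decision_key])
    return {group: positives[group] / totals[group] for group in totals}

if __name__ == "__main__":
    # Toy credit-scoring decisions; the sensitive attribute is processed only
    # to measure bias and is limited to what the measurement strictly needs.
    decisions = [
        {"gender": "F", "approved": True},
        {"gender": "F", "approved": False},
        {"gender": "F", "approved": False},
        {"gender": "M", "approved": True},
        {"gender": "M", "approved": True},
        {"gender": "M", "approved": False},
    ]

    rates = selection_rates(decisions)
    disparity = max(rates.values()) - min(rates.values())  # demographic parity difference
    print(f"selection rates: {rates}, disparity: {disparity:.2f}")

    # Erasure step: drop the sensitive attribute once the purpose is achieved,
    # mirroring the safeguard in the proposed Article 4a.
    for record in decisions:
        record.pop("gender", None)

In practice such measurements would sit inside a documented testing process with access controls and records of necessity, but the underlying logic remains a comparison of outcomes across groups, which is precisely what requires the sensitive attribute in the first place.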
Strengthened role of the European AI Office
The Digital Omnibus on AI proposes to significantly strengthen the position of the European AI Office and to establish it as a central supervisory authority for selected categories of AI systems.
The European AI Office (AI Office) will directly supervise in particular:
- AI systems based on GPAI models, where both the model itself and the downstream AI system originate from the same provider;
- AI systems integrated into very large online platforms or very large online search engines.
This is accompanied by a significant expansion of the powers of the AI Office. These should include, for example, the power to require documentation from obliged entities under the AI Act, to examine and assess data sets, to carry out inspections, to supervise real-world testing, to assess related risks and to impose sanctions within the limits laid down by the AI Act. The AI Office would also be expressly empowered to carry out ex ante conformity assessments of AI systems before they are placed on the market.
At the same time, the proposal provides that the AI Office will not supervise AI systems embedded in products regulated by sectoral legislation listed in Annex I to the AI Act (e.g. medical devices, machinery or aviation systems). Supervision of these systems remains within the competence of the relevant sectoral supervisory authorities under specific legislation.
Revised requirements on AI literacy
The current wording of the AI Act imposes on providers and deployers of AI systems an obligation to ensure a sufficient level of AI literacy of persons involved in their operation and use. These obligations have long been criticised as vague and administratively burdensome, especially for smaller undertakings.
The Digital Omnibus on AI therefore proposes to shift the main responsibility for promoting AI literacy from deployers to the Commission and the Member States. The new Article 4 of the AI Act would thus require the Commission and the Member States to actively encourage providers and deployers of AI to ensure a sufficient level of AI literacy through non-binding instruments such as training, information resources and the exchange of best practices.
This change would not, however, affect other explicit training obligations under the AI Act, for example for deployers of high-risk AI systems.
Simplification of registration and technical documentation
The Digital Omnibus on AI narrows certain registration obligations, in particular for AI systems in respect of which the provider has concluded that they do not fall within the category of high-risk systems under Article 6(3) of the AI Act because they do not pose a significant risk to the health, safety or fundamental rights of individuals. In such cases, providers will only be required to document their exemption assessment internally and will no longer have to register these systems in the EU database, which is intended to significantly reduce the administrative burden.
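Purely as an illustration of what such internal documentation might look like (the proposal does not prescribe any format, and every field name below is an assumption), a provider could keep a structured record of its Article 6(3) assessment along the following lines:

# Hypothetical internal record of an Article 6(3) exemption assessment.
# The proposal prescribes no format; all field names are illustrative.
from dataclasses import dataclass, field
from datetime import date

@dataclass
class ExemptionAssessment:
    system_name: str
    annex_iii_use_case: str    # which Annex III category was considered
    rationale: str             # why no significant risk to health, safety or fundamental rights
    assessed_by: str
    assessed_on: date
    review_due: date           # internal re-assessment, e.g. after a substantial modification
    supporting_documents: list[str] = field(default_factory=list)

record = ExemptionAssessment(
    system_name="Invoice triage assistant",
    annex_iii_use_case="Considered against Annex III; no listed use case applicable",
    rationale="The system only orders incoming invoices and takes no decisions affecting individuals.",
    assessed_by="Internal AI governance team",
    assessed_on=date(2026, 3, 1),
    review_due=date(2027, 3, 1),
)
print(record)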
At the same time, the proposal extends certain regulatory reliefs that have so far applied only to small and medium-sized enterprises (SMEs) to so-called small mid-cap undertakings (SMCs) as well. SMCs will thus newly benefit from a simplified technical documentation regime, relief in the area of quality management systems (QMS) and a more lenient regime for the calculation of administrative fines. The simplified QMS regime, which has so far applied only to micro-enterprises, is also extended to SMEs.
The aim of these changes is to reduce the disproportionate administrative burden on undertakings which, although they do not fall within the classical definition of SMEs, are in a similarly vulnerable position in terms of compliance capacity and cost structure.
The proposal further introduces the principle of “single application” and “single assessment” for entities acting as notified bodies (conformity assessment bodies) both under the AI Act and under the sectoral legislation listed in Annex I to the AI Act.
Instead of parallel and duplicate notification procedures, it will thus be possible to submit a single application and undergo a single assessment process, which should simplify administration, shorten the duration of notification procedures and reduce the regulatory burden both for the notified bodies themselves and for manufacturers of regulated products.
Regulatory sandboxes and real-world testing
The proposal expands the possibilities for the use of regulatory sandboxes and real-world testing as instruments to link AI supervision with support for innovation. It provides for the establishment of a pan-European regulatory sandbox operated by the AI Office, in particular for AI systems based on GPAI models developed by the same provider.
Alongside national sandboxes, a two-tier structure is thus to emerge, in which from 2028 the EU sandbox should enable cross-border experimentation, especially in sectors where full conformity assessment would otherwise significantly hamper innovation, such as transport, medical technology, energy or advanced manufacturing.
At the same time, the possibilities for real-world testing outside sandboxes are also expanded. Providers of high-risk AI systems, including those embedded in products regulated by sectoral legislation listed in Annex I to the AI Act, will be able to test their systems under controlled conditions and under supervision. Member States may conclude voluntary coordination agreements with the Commission for these projects. The aim is to accelerate the development of safe and legally compliant AI without the need for premature mass deployment.
Conclusion
The Digital Omnibus on AI represents a conceptual revision of the way in which the AI Act is to be implemented and enforced in practice. Although the proposal is formally framed as a technical and implementation “fine-tuning” of already adopted regulation, its adoption will have tangible impacts both on providers and deployers of AI systems and on supervisory authorities. It moves towards greater uniformity, technical feasibility and a reduction of administrative burdens, without weakening the protective purpose of the AI Act itself.
At the same time, this is not merely a neutral simplification of the rules. The Digital Omnibus on AI shifts the balance between support for innovation and the intensity of regulatory supervision: on the one hand, it extends deadlines, expands sandboxes and relaxes certain formal obligations; on the other hand, it strengthens centralised supervision by the European AI Office, in particular with regard to general-purpose AI models (GPAI) and very large online platforms.
For undertakings operating in the EU, this means a dual challenge. On the one hand, greater legal certainty, more predictable timetables for obligations and a reduction of part of the compliance burden can be expected, especially with regard to high-risk AI systems, registration obligations, post-market monitoring and AI literacy requirements. On the other hand, the proposal also opens up new strategic questions, for example concerning preparedness for supervision by the AI Office, participation in regulatory sandboxes, the setting of governance models for GPAI and the overall management of risks associated with the operation of AI.
Given that the Digital Omnibus on AI is still at the stage of a legislative proposal and will be further discussed by the European Parliament and the Council, further amendments to its wording can be expected. For organisations that are already implementing the AI Act or are only now assessing its impacts, it will be crucial to continuously monitor the development of the proposal and to adapt internal processes, documentation and technical solutions in a timely manner so as to be prepared for the new regulatory framework well in advance.
Should you have any questions regarding the impacts of the Digital Omnibus on AI, the AI Act, the Digital Omnibus or other areas of EU regulation and compliance, we at PEYTON legal are at your disposal.
[1] Proposal for a REGULATION OF THE EUROPEAN PARLIAMENT AND OF THE COUNCIL amending Regulations (EU) 2016/679, (EU) 2018/1724, (EU) 2018/1725, (EU) 2023/2854 and Directives 2002/58/EC, (EU) 2022/2555 and (EU) 2022/2557 as regards the simplification of the digital legislative framework, and repealing Regulations (EU) 2018/1807, (EU) 2019/1150, (EU) 2022/868, and Directive (EU) 2019/1024 (Digital Omnibus).
[2] Proposal for a REGULATION OF THE EUROPEAN PARLIAMENT AND OF THE COUNCIL amending Regulations (EU) 2024/1689 and (EU) 2018/1139 as regards the simplification of the implementation of harmonised rules on artificial intelligence (Digital Omnibus on AI).
[3] Regulation (EU) 2024/1689 of the European Parliament and of the Council of 13 June 2024 laying down harmonised rules on artificial intelligence and amending Regulations (EC) No 300/2008, (EU) No 167/2013, (EU) No 168/2013, (EU) 2018/858, (EU) 2018/1139 and (EU) 2019/2144 and Directives 2014/90/EU, (EU) 2016/797 and (EU) 2020/1828 (Artificial Intelligence Act).
[4] Regulation (EU) 2016/679 of the European Parliament and of the Council of 27 April 2016 on the protection of natural persons with regard to the processing of personal data and on the free movement of such data, and repealing Directive 95/46/EC (General Data Protection Regulation – GDPR).
[5] Regulation (EU) 2023/2854 of the European Parliament and of the Council of 13 December 2023 on harmonised rules on fair access to and use of data (Data Act).
[6] Regulation (EU) 2024/2847 of the European Parliament and of the Council of 23 October 2024 on horizontal cybersecurity requirements for products with digital elements and amending Regulations (EU) No 168/2013 and (EU) 2019/1020 and Directive (EU) 2020/1828 (Cyber Resilience Act).
Mgr. Jakub Málek, managing partner – malek@plegal.cz
JUDr. Tereza Pechová, junior lawyer – pechova@plegal.cz
22. 1. 2026