Of interest.

Artificial Intelligence (AI) regulation

Artificial intelligence (AI) is a topic currently moving the world, and the Czech Republic is no exception. Thanks to rapid technological progress, changes arrive before legislators have time to react to them.

AI Act
The first attempt to change this was the proposal for a Regulation of the European Parliament and of the Council laying down harmonised rules on artificial intelligence (Artificial Intelligence Act) and amending certain Union legislative acts (the “AI Act”).

The EU's original intention was more of a soft-law approach, as outlined in the 2020 White Paper on Artificial Intelligence, but a legislative route was eventually chosen, culminating in the draft AI Act, which saw the light of day in April 2021.

Much has changed since then; in particular, generative systems such as ChatGPT have been made available to the general public, making it necessary to reflect these developments. This was done on 11 May 2023, when the Internal Market Committee and the Civil Liberties Committee adopted their amendments.

The next step soon followed: in mid-June this year, MEPs approved the draft AI Act, which now proceeds to trilogue negotiations with the European Commission and the Council of the EU. If all goes according to plan, the final version of the AI Act could be ready by the end of this year.

As this is the first-ever legal framework for artificial intelligence, considerable care must be taken in its interpretation. The stated objectives of the AI Act are, first and foremost, to ensure the safety of AI systems placed on the EU market, to provide the legal certainty needed to encourage investment and innovation in AI, and to improve the governance and effective enforcement of existing legislation on fundamental rights and safety requirements.

Although the EU is currently considered the leader in AI regulation, it is far from a global power in AI development. The United States and China in particular excel in this area, while the EU invests many times less in research. If the AI Act proves too restrictive, EU Member States may find themselves left out of the technological revolution that AI brings.

Definition of artificial intelligence
The AI Act primarily defines the term AI itself, which until now has been a widely used but rather vague term without a legal basis, a gap that could create problems in the future, especially in questions of liability.

The Commission is thus introducing a definition based on the definition already used by the Organization for Economic Co-operation and Development (OECD), which is relatively technology-neutral to allow it to be used for future AI systems.

In short, artificial intelligence is defined as autonomously operating software that is capable of generating content, predictions, recommendations, or decisions that affect the environment in which the software is deployed.

Annex 1 of the proposal contains a list of technological approaches to the development of artificial intelligence, which should be supplemented by the Commission through the adoption of delegated acts as innovation progresses.

What is new in the AI Act?
Classification of systems according to the level of risk
Although most AI systems pose only minimal risk, every system must be categorised by risk level and must comply with the obligations attached to its classification.

The regulation works with a total of four risk levels:
(i)     unacceptable,
(ii)    high,
(iii)   limited and
(iv)   low or minimal.

Artificial intelligence systems with unacceptable levels of risk will be banned because they are contrary to the values of the European Union.

Explicitly, the AI Act should prohibit the placing on the market or putting into service of systems that manifestly endanger the life, health, safety or rights of individuals and thus create unacceptable risk. These include AI systems that use subliminal techniques to cognitively manipulate human behaviour or exploit the vulnerabilities of specific groups (e.g. voice-controlled toys that promote violence in children), systems used by public authorities for social scoring, and systems used for emotion recognition or real-time remote biometric identification.

However, the AI Act provides an exception to the prohibition on the use of AI for biometric identification in the prosecution of serious crimes (e.g. terrorism): subject to court approval, it may be used for the retrospective biometric identification of offenders.

For systems posing a high risk to the life, health and rights of individuals, the AI Act proposes two different regimes depending on the function performed as well as the specific purpose of the AI system.

In general, high-risk systems fall into two categories:
(i)     systems used as part of products regulated by European safety regulations (e.g. cars, toys); and
(ii)    systems used in eight specific areas, including education, critical infrastructure, employment, law enforcement and border protection.

More stringent requirements are to be placed on providers when using high-risk AI systems, including:

  • the establishment, application, documentation and maintenance of a risk management system, subject to regular updates, on the basis of which the necessary measures will be taken;
  • preparation of technical documentation and its continuous updating;
  • the transparency of the AI system used, which is achieved by, inter alia, the provision of concise, complete, factually correct instructions containing the identity and contact details of the provider, the features, and capabilities of the system including its intended purpose and level of reliability and cyber security; and
  • the need for human supervision aimed at preventing or minimising risks.

An additional obligation for providers of high-risk AI systems is mandatory registration, before the actual placing on the market or putting into service, in a publicly accessible database managed by the Commission, which is not yet available.

The last two risk categories of AI systems are not subject to any fundamental requirements.

In the case of artificial intelligence systems that recognise emotions, manipulate image, sound or video content (so-called deepfakes) or chatbots, users should be aware that they are using an artificial intelligence system and be able to decide whether to continue using it.

The residual category of low or minimal-risk systems should not be subject to any specific requirements and may be placed on the EU market almost without difficulty. However, the AI Act foresees the development of codes of conduct encouraging voluntary compliance with the requirements imposed on high-risk AI systems, possibly supplemented by further requirements, for example regarding environmental sustainability.

The category of so-called foundation models
The new category of foundation models was added following the European Parliament's deliberations, in response to the technological developments associated with the success of ChatGPT.

The basic characteristics of foundation models are that they are trained on large amounts of data, can process and understand diverse information and produce general-purpose output, and are adaptable to a wide range of different tasks.

Given the need for regulation that ensures security and credibility while promoting innovation and overall competitiveness, new requirements for providers of foundation models have also been introduced. Providers will now have to register these models in a database, produce technical documentation and provide clear instructions for use.

However, for the specific type of foundation model known as a generative AI system, designed to create content such as text, images, video and audio, the range of requirements placed on providers expands. To maintain transparency, providers should be obliged to indicate that content has been generated by a generative AI system and to implement sufficient safeguards ensuring that content is generated in accordance with the law.

The most debated requirement is undoubtedly the obligation to publish a summary of the copyrighted data used for the generative system's development and training, so as to provide users with the most relevant information. This obligation aims to prevent copyright infringement: generative systems often produce content very similar to existing works, and it is impossible to prove whether the system generated the content independently or was largely inspired by another work.

Regulatory sandbox
The AI Act envisages the establishment of a regulatory sandbox, a controlled environment in which the AI system can be tested in real conditions before it is brought to market.

In the test environment, providers should be able, under certain conditions, to further process personal data originally collected for other purposes, by way of derogation from Regulation (EU) 2016/679 of the European Parliament and of the Council of 27 April 2016 on the protection of natural persons with regard to the processing of personal data and on the free movement of such data, and repealing Directive 95/46/EC (GDPR).

New institutions
Another novelty is the planned establishment of a European Artificial Intelligence Board, composed of representatives of the Member States and the Commission, whose agenda will consist primarily of facilitating the implementation of the AI Act and its harmonised application. At the national level, Member States will be obliged to designate one or more supervisory authorities to oversee the correct application of the AI Act.

Advantages/disadvantages and impact on daily life
Artificial intelligence is an increasingly used technology reaching into most areas of everyday life. With growing investment in AI development, we will see an increasing number of AI systems, for example chatbots advising customers on websites; AI has also found use in personalised advertising, medicine, the automotive industry and many other areas.

Currently, according to the OECD report on the impact of artificial intelligence on the labour markets of its member states[1], approximately 27% of all jobs fall into the category highly vulnerable to automation; in the Czech Republic, the figure is even higher, at 35%.

Among the most at-risk jobs are those requiring high qualifications and years of experience, i.e. people working in medicine, finance, law, engineering and business. Surprisingly, research shows that people in these sectors use AI the most and rate its benefits very positively. The labour-market benefits of AI include not only cost reduction but also increased productivity and employee satisfaction, as tedious and life-threatening tasks are reduced; at the same time, AI will significantly accelerate the pace of work and multiply stressful situations. Beyond the labour market, AI offers new opportunities in public transport, energy, education and the green economy.

On the other hand, AI of course has its negatives. As AI systems constantly evolve and learn, they can adopt undesirable patterns of behaviour and risk producing biased conclusions that discriminate against women, people with disabilities, and ethnic or other minorities. In addition, the use of AI systems will mean a gradual increase in redundancies, invasions of individuals' privacy that need to be minimised, and distortions of competition where only some competitors accumulate relevant information.

What is the situation here?
In May 2019, the Czech Republic, specifically the Ministry of Industry and Trade, became one of the first European countries to publish an approved National Strategy for Artificial Intelligence[2]. The strategy aims to promote the development and use of AI and to regulate its potential impact on certain sectors. Given the overall modernisation and expansion of AI into many spheres of everyday life, a public consultation is currently underway, and its findings are to be reflected in an updated version of the strategy, which is already in preparation.

Artificial intelligence is on the rise and will increasingly intrude into the lives of the general public. It is, therefore, crucial to adapt to this development and profit from the benefits that AI brings, rather than succumb to its negatives. The legal regulation and legal implications of the use of AI technologies in business and other activities are crucial issues.

Nowadays, the use of artificial intelligence systems leads to situations that are not addressed by the law, leaving providers, users and others in a legal vacuum that needs to be filled. Although AI regulation is still in its early days, the published text of the AI Act gives a hint of the direction the EU will take.

We at PEYTON legal will follow the developments in the field of AI regulation, especially the legislative process and the subsequent implementation of the AI Act at the national level.


[1] OECD, OECD Employment Outlook 2023: Artificial Intelligence and the Labour Market. OECD Publishing [online]. 11 July 2023 [cited 28 July 2023]. Available from: oecd-ilibrary.org.

[2] Ministerstvo průmyslu a obchodu, National Strategy for Artificial Intelligence in the Czech Republic. mpo.cz [online]. 6 May 2019 [cited 28 July 2023]. Available from: vlada.cz (NAIS_kveten_2019.pdf).


Tereza Benešová, legal assistant – benesova@plegal.cz

Mgr. Jakub Málek, managing partner – malek@plegal.cz




09. 08. 2023