Europe on the road to a uniform legal framework for artificial intelligence

The current coronavirus situation has naturally made us think, among other things, about whether, as a society, we are on the verge of a rapid rise in the use of artificial intelligence (AI), especially at work. The events of recent months have shown that, under the crisis measures adopted in response to the spread of coronavirus, AI has very quickly become part of our daily lives. Its everyday use clearly brings significant advantages, especially in terms of increased work efficiency. On the other hand, the involvement of AI in each of our lives also brings a number of risks and many legal questions that have not yet been fully answered, which we will try to address in this article.

White Paper on Artificial Intelligence

Although the use of AI has spread rapidly, especially during the coronavirus pandemic, its rise has been a widely discussed topic in the Czech Republic and abroad for several years. These discussions have raised several important questions regarding AI and its involvement in private and public life. A basis for the answers can be found in the White Paper on Artificial Intelligence (full title: White Paper on Artificial Intelligence – A European Approach to Excellence and Trust), published by the European Commission earlier this year (the “White Paper”).[1] The White Paper states that a coherent European approach to AI is needed, building on the European Strategy for Artificial Intelligence presented in April 2018.[2]

Although the White Paper is a broad document and therefore does not provide specific answers to specific questions about the risks associated with the involvement of AI in everyday life, it provides a solid framework for possible solutions to these risks. It presents policy options intended to enable the credible and, above all, safe development of artificial intelligence in Europe. The White Paper emphasises that at its core is a policy framework aimed at cooperation between the private and public sectors and at creating the right incentives to adopt solutions to AI issues more quickly.

In general, the White Paper summarises the challenges faced by the European Union (EU) in relation to AI, including the need to ensure access to the necessary data, and identifies five possible approaches:

  1. voluntary designations (e.g. official ethical or credible AI certifications);
  2. sectoral requirements for public administration and facial recognition (mentioning the surprising possibility of a blanket ban on the use of these systems);
  3. mandatory requirements for application in high-risk sectors (e.g. healthcare or transport);
  4. revised safety and liability legislation; and
  5. a system of strict supervision of state administration.

The White Paper generally works with the concept of risk in the sector in which AI is to be deployed: depending on the degree of such risk, appropriate regulatory measures should be introduced in line with a risk-based approach.

At the same time, the White Paper sets out ways to enable the credible and secure development of AI in Europe, while fully respecting the values and rights of EU citizens.

According to the White Paper, the development of AI requires that legislation introducing its regulation should focus on the following key aspects to cover all safety and liability risks:

  1. human agency and oversight;
  2. technical robustness and safety;
  3. privacy and data governance;
  4. transparency;
  5. diversity, non-discrimination and fairness;
  6. societal and environmental well-being; and
  7. accountability.

The White Paper is now in the public consultation process, which is open until 14 June 2020.

Definition of AI

Although AI is referred to in many different ways, the crucial question remains how to define it.

According to a document issued by the High-Level Expert Group on Artificial Intelligence (AI HLEG) entitled A definition of AI: Main capabilities and scientific disciplines, the definition of AI is as follows: “Artificial intelligence (AI) systems are software (and possibly also hardware) systems designed by humans that, given a complex goal, act in the physical or digital dimension by perceiving their environment through data acquisition, interpreting the collected structured or unstructured data, reasoning on the knowledge, or processing the information, derived from this data and deciding the best action(s) to take to achieve the given goal. AI systems can either use symbolic rules or learn a numeric model, and they can also adapt their behaviour by analysing how the environment is affected by their previous actions.”

The White Paper then offers its own simplified definition of AI as “a set of technologies that combine data, algorithms and computational power”, which is the definition we will work with in this article.

Risks associated with the use of artificial intelligence

While AI can bring many advantages, such as greater product safety and more accurate procedures, its use can also cause damage. Such damage can be both material and immaterial and is associated with several risks.

The main risks associated with the use of AI relate to the application of rules protecting fundamental human rights, as well as to security and liability issues. A fundamental change in the functioning of human society in this respect is that AI can now perform several functions (i.e. professions) that until now could only be performed by humans. As a result, individuals may increasingly encounter measures and decisions taken by AI systems, or with their assistance, which can sometimes be difficult to understand and against which it may be difficult to mount an effective legal defence. It should be noted that existing product safety legislation already supports an extended safety concept protecting against all types of risks arising from a product according to its use.

We therefore believe that, in order to increase legal certainty for individuals, it would be appropriate to introduce regulation that explicitly covers the new risks specifically associated with AI. The specific features of many AI technologies, including opacity (the so-called black box effect), complexity, unpredictability and partially autonomous behaviour, can make it more difficult to verify their compliance with the law.

Responsibility for the use of AI and current practice

When AI technologies are part of products and services, they can pose new security risks for their users. For example, due to a flaw in object-recognition technology, an autonomous car may misidentify an object on the road and cause an accident resulting in injury as well as material and non-material damage. The European Commission initially favoured covering the legal framework for liability through the Product Liability Directive (Directive 85/374/EEC), but now appears to be moving towards a new stand-alone liability regime, because the specific features of AI change the characteristics not only of many products but also of services.

As noted above, the White Paper is in this regard only a general framework for possible future regulation, but it puts people at the centre of the supervision and configuration of AI. The White Paper is accompanied by the Report on the safety and liability implications of Artificial Intelligence, the Internet of Things and robotics,[3] which addresses, among other things, the issue of liability, and the guidelines contained in this report will certainly be the subject of many further discussions.

We believe that, as long as the framework for the legal regulation of AI liability remains only general, existing legislation must be used to address liability for any damage caused by AI. Under the Czech legal system, the national regime applicable to liability in connection with AI can be found in Act No. 89/2012 Coll., the Civil Code, as amended (the “Czech Civil Code”).

If we take into account the approach of the White Paper, which puts the human being at the centre of AI (since every AI technology is set up by a person, and the use of AI still requires a certain degree of human interaction in the form of supervision), liability may be governed by § 2937 of the Czech Civil Code. Under this provision, if a thing causes damage by itself, the damage is to be compensated by the person who was supposed to supervise the thing; if no such person can be identified, the owner of the thing is liable. Anyone who proves that they did not neglect proper supervision is released from the obligation to pay damages.

If, on the other hand, the AI is fully autonomous, liability may be governed by § 2939 of the Czech Civil Code, under which damage caused by a defect in movable property intended to be placed on the market as a product for sale, lease or other use is to be compensated by the person who manufactured the product or its component(s) or otherwise obtained them, jointly and severally with any person who marked the product or its part with their name, trademark or otherwise.

It is clear that the implementation of AI into everyday life carries risks, and the issue of liability for AI is therefore likely to be the most pressing one. Liability for human lives, whether in the case of accidents caused by autonomous vehicles or liability for AI in the provision of healthcare, may well entail the obligation to pay high compensation. For some companies this could mean bankruptcy, and it will therefore be necessary to consider introducing an insurance scheme for the use of AI in the future.

Conclusion

The legal framework for AI still leaves a whole range of questions open, depending on the degree of risk associated with the use of AI. AI legislation, or the incorporation of AI regulation into existing legislation, will need to be approached holistically, as the use of AI affects several areas of law, including in particular liability, privacy and personal data protection, human rights and cybersecurity. The legal regulation of liability is likely to depend on the degree of human activity in, and supervision of, AI.

In addition, it should be emphasised that, despite all the risks mentioned above, AI is a strategic technology that offers many benefits to citizens, businesses and the public as a whole. The world may see even more of its benefits after the recent situation caused by the coronavirus pandemic, when a significant part of life moved to the virtual world. Many employers (entrepreneurs) using AI can thus see that AI offers a significant increase in work efficiency and productivity and can also help find solutions to some of the problems associated with sustainability and demographic change.

 

JUDr. Jáchym Stolička, junior lawyer – stolicka@plegal.cz

Tereza Pšenčíková, LL.B., LL.M., junior lawyer – psencikova@plegal.cz

Mgr. Jakub Málek, partner – malek@plegal.cz

 

www.peytonlegal.cz

 

02. 06. 2020

 

[1] White Paper on Artificial Intelligence – A European Approach to Excellence and Trust, available at: https://op.europa.eu/cs/publication-detail/-/publication/aace9398-594d-11ea-8b81-01aa75ed71a1

[2] Artificial Intelligence for Europe, available at: https://ec.europa.eu/transparency/regdoc/rep/1/2018/CS/COM-2018-237-F1-CS-MAIN-PART-1.PDF

[3] Report from the Commission to the European Parliament, the Council and the European Economic and Social Committee – Report on the safety and liability implications of Artificial Intelligence, the Internet of Things and robotics, 19 February 2020, available at: https://op.europa.eu/en/publication-detail/-/publication/4ce205b8-53d2-11ea-aece-01aa75ed71a1/language-cs

 