EU White Paper on AI

Described by the Commission as “the first pillar of [its] new digital strategy”, the EU White Paper on AI, published in February 2020, has divided critics.

The White Paper outlines the Commission’s proposals to create an environment that promotes the development of artificial intelligence (AI) in Europe, whilst respecting fundamental human rights and freedoms. This means prioritising human-centric development of AI – AI developed by humans, for humans.

Despite general concerns that the EU is overly consumer-focused in its approach to AI, this White Paper is markedly industry-orientated. It sets out the EU’s strategy in respect of investment, SMEs, upskilling employees and catching the next data wave.

What does the EU White Paper on AI cover?

Most Member States are relying on an imperfect tapestry of existing consumer rights, product safety, liability and data protection legislation to create a framework within which businesses and individuals can enforce their rights in respect of AI-based products and services.

Conscious of the need for a bespoke regulatory framework, several governments have started to propose their own regulatory schemes (the White Paper gives the examples of Germany, Denmark and Malta). Most Member States have adopted their own policies to promote investment, research and development. It is against this backdrop that the EU’s intervention finds its purpose.

Pan-European legislative action will avoid fragmented and inconsistent laws across Europe – it should also reduce the risks of self-limitation and isolation that critics associate with the implementation of pioneering legislation.

However, the question is whether the Commission’s proposals can result in legislation that is supportive of business, effective in protecting consumers and sustainable in the rapidly evolving fields of machine learning and artificial intelligence. Achieving this is a strategic priority: the EU has identified the use and development of AI as being in the public interest, deeming it essential not only to achieving the Sustainable Development Goals, but also to providing better services to citizens and creating business development opportunities in the EU’s key sectors.

Trust has also been identified as pivotal to the early adoption and sustainable development of AI tools.

In April 2019 the High-Level Expert Group on Artificial Intelligence (HLEG AI), an independent group set up by the European Commission, published its Guidelines on Ethical AI, on which we have commented here. Calls for clarity from consumer organisations followed, bringing consumer protection concerns to the forefront of the debate. The White Paper builds on these previous endeavours, setting the objective of creating an ecosystem of excellence and an ecosystem of trust to meet its “twin objectives” of promoting the uptake of AI and addressing the associated risks.

Since the White Paper was published, the HLEG AI has also produced the Assessment List for Trustworthy AI (ALTAI), the self-assessment checklist it promised in 2019. This is an important first step towards ending self-regulation; however, it remains a flexible and informal tool. The requirements set out in the previous guidelines and in this self-assessment list have no legal authority and do not bind organisations. They are an internal tool which organisations can use to benchmark their practices and to understand and minimise risks. Their effectiveness will depend on uptake and on the EU’s next steps.

A report has also been published summarising the results of the consultation on the White Paper, which ran from 19 February to 14 June 2020. The consultation received over 1,200 responses from EU citizens (31%), companies and business associations (29%), research institutions (13%) and NGOs (11%). We draw on the report’s findings where relevant throughout this article to explore the risks and opportunities identified in the White Paper.

This article has two purposes: to review the EU’s proposals, and to help you consider what the proposed legislative framework will mean for businesses operating in the EU (including UK businesses post-Brexit).

How does the EU White Paper define AI?

The Commission’s White Paper defines artificial intelligence as “a collection of technologies that combine data, algorithms and computing power”. This is a simple and broad definition, which we will use for the purposes of this article.

It is worth noting that the White Paper recognises the limitations of this definition and the Commission has highlighted the difficulty of defining AI in such a manner that the legislation will not quickly become outdated by technological progress.

Some have pointed out, quite rightly, that we need to consider what we are regulating and why. This is a helpful reminder that, when it comes to AI, the full scope of the subject matter is not yet known and is potentially wider than we currently imagine.

The Twin Objectives in the EU White Paper on AI

The White Paper contains policy and regulatory proposals, divided into two pillars: the ecosystem of excellence and the ecosystem of trust.

The ecosystem of trust contains the regulatory proposals, which we have reviewed in more detail than the policy proposals.

Ecosystem of Excellence

This pillar sets out the Commission’s policy framework for AI. The following aims are highlighted in the White Paper:

  • Exploiting existing strengths in the EU market in collaboration with Member States;
  • Enhancing research and development by increasing public and private investment. It is noted that “investment in research and innovation in Europe is still a fraction […] of the investment in other regions in the world” – approximately equivalent to a quarter of the value of investment in the North American market and half of that in the Asian markets;
  • Improving access to and management of data in a way that builds trust and ensures re-usability of data;
  • Upskilling the workforce and SMEs to adopt and use AI-based products and services – 90% of respondents to the Commission’s consultation considered this a key action in advancing the EU’s excellence in the AI marketplace; and,
  • Prioritising international cooperation based on EU rules and values, including in bilateral and multilateral (World Trade Organisation) negotiations.

Ecosystem of Trust

The White Paper identifies lack of transparency, traceability and human oversight as key weaknesses in current AI practices. These weaknesses purportedly prevent both businesses and consumers from trusting AI-based products and services.

A staggering 90% of respondents to the Commission’s consultation on the White Paper said that they feared the possibility of AI breaching fundamental human rights, and 87% feared discriminatory outcomes from automated decision-making.

Changes to the existing legislative framework

To build trust, the Commission sets out five recommendations to improve the existing legislative framework:

  1. Ensuring the proper application and enforcement of EU and national legislation;
  2. Expanding the scope of product liability legislation to include services based on AI technology;
  3. Expanding the scope of product safety legislation in order to better address any risks posed by system or software updates;
  4. Clarifying the allocation of responsibility between different members of the supply chain; and,
  5. Addressing the safety risks posed by the introduction of AI software, whether via updates or self-learning when the product is being used.

These proposals appear to be logical next steps, but there is significant industry concern that the framework must remain proportionate if it is to avoid restricting development and becoming overly burdensome on businesses. This was considered further by the Commission in its report to the European Parliament on the safety and liability implications of Artificial Intelligence, the Internet of Things and robotics.

The foundations of a new regulatory framework for AI

In addition to the adjustments described above, the Commission recommends a new regulatory framework for AI. The framework would apply to “high risk” applications. The Commission’s thought process in this respect can be summarised as follows:

high risk sector + high risk decision = high risk AI

If an application is identified as being “high risk”, then the mandatory legal requirements of the new EU framework would apply (discussed in more detail below).
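For illustration only, this cumulative test can be sketched in code. The snippet below is a toy encoding of the two-limb approach, in which we have assumed a hypothetical list of high-risk sectors and treated remote biometric identification as high risk irrespective of sector (in line with the White Paper's treatment of that use case); none of the names or criteria below are taken from the Commission's proposals.

# A minimal sketch of the White Paper's cumulative "high risk" test.
# The sector list and parameter names are hypothetical illustrations,
# not criteria taken from the Commission's proposals.

HIGH_RISK_SECTORS = {"healthcare", "transport", "energy"}  # assumed examples

def is_high_risk(sector: str,
                 use_carries_significant_risk: bool,
                 remote_biometric_identification: bool = False) -> bool:
    """Return True where the mandatory legal requirements would apply."""
    # Certain exceptional uses (e.g. remote biometric identification)
    # would be treated as high risk regardless of sector.
    if remote_biometric_identification:
        return True
    # Otherwise both limbs must be met cumulatively:
    # a high-risk sector AND a high-risk use.
    return sector in HIGH_RISK_SECTORS and use_carries_significant_risk

# Example: a triage tool making consequential healthcare decisions
print(is_high_risk("healthcare", use_carries_significant_risk=True))  # True
print(is_high_risk("retail", use_carries_significant_risk=True))      # False

The all-or-nothing character of this test is precisely what critics take issue with, as discussed below.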

This binary view of the regulation of AI (an all-or-nothing approach to the mandatory legal requirements) has been heavily criticised. In particular, it seems impossible to determine objectively the risk level at which additional regulation becomes necessary.

The White Paper suggests that applications which do not qualify as “high risk” could still, on a voluntary basis, be labelled as conforming with any framework that is put in place, subject to satisfying the relevant criteria, as a way of reassuring customers. Such labelling would attract the same responsibilities as if the mandatory legal requirements applied to the application.

Mandatory legal requirements

The proposed framework would contain mandatory legal requirements to ensure the trustworthiness of high-risk products and services. Below are some of the examples given by the Commission in the White Paper, each pairing a risk area with an example of how it might be regulated:

  • Training data – ensuring that data sets are sufficiently representative;
  • Data and record-keeping – maintaining documentation on the programming and training methodologies used;
  • Information to be provided to end-users and the supply chain – providing information in respect of the purpose for which the systems are intended or their level of accuracy;
  • Robustness and accuracy – ensuring that all outcomes are reproducible;
  • Human oversight (or the lack of it) – automated decisions being validated by humans; and,
  • Biometric identification – AI can only be used for remote biometric identification where such use is duly justified, proportionate and subject to adequate safeguards.

Prior conformity assessment

The White Paper also indicated that mandatory conformity assessment could be required prior to products being placed on the internal market.

These assessments would be similar to existing conformity mechanisms, and ongoing conformity assessments could also be required. Proposals for conformity controls before and after a product is placed on the market were supported by 62% of respondents to the Commission’s consultation on the White Paper, but critics cite them as part of the bureaucratic red tape that may hold back research and development in the EU.

No structure for these assessments has been suggested at this stage, though it is easy to imagine that the ALTAI self-assessment could form the basis of future tools in this respect.

What does this mean for business?

Unfortunately, we still do not have a regulatory framework, or concrete proposals for amendments to existing legislation, so tangible progress remains limited. However, the developments we have seen give us more than an inkling of the direction in which the Commission is moving.

It appears to us that the Guidelines published by the High-Level Expert Group in April 2019 remain central to the Commission’s strategy and are perhaps the most useful tool for businesses looking to future-proof their relationship with data and the development of AI-based products or services. The 2019 Guidelines should be read in conjunction with the ALTAI self-assessment tool in order to prepare for upcoming changes.

Nonetheless, stakeholders are not in agreement over the appropriate course of action and the legislative process in the European Union is notoriously slow.

It is likely to be several years before bespoke legislation on artificial intelligence is enacted. The European Data Protection Supervisor’s Opinion of 29 June 2020 commends the EU-wide approach, but wishes to see more concrete, and perhaps more robust, action. Certain lobbies and critics, on the other hand, have been quick to raise concerns that additional regulation will strangle start-ups, reduce the competitiveness of EU-based businesses and make the internal market less attractive. In practice, it is probably too early to tell.

The outcome is likely to depend on:

  1. Whether the assistance promised to SMEs and research institutions manifests itself in the policy actions taken by the EU; and,
  2. What the proposed regulations actually contain.

In the meantime, the existing legal framework continues to apply and Member States may create their own additional regulations. The United Kingdom’s transition period is set to end on 31 December 2020, leaving the country in limbo: free to develop independent policies and regulatory measures, yet required to comply with the EU’s future regulatory framework in order to access its internal market.

By way of conclusion, it is worth remembering that the EU is not the only power seeking to regulate the creation and use of artificial intelligence to its advantage. For example, the United States has identified that “continued American leadership in AI is of paramount importance to maintaining the economic and national security of the United States and to shaping the global evolution of AI in a manner consistent with [its] Nation’s values, policies, and priorities.” Draft policy and regulation proposals were prepared in January 2020, and a review of the American AI Initiative was published in February 2020.

UK businesses should keep a close eye on the action being taken in the key markets in which they operate, as additional restrictions are likely to apply.