The perils of unregulated and biased Artificial Intelligence (AI) have been making headlines recently as a threat to public trust and safety.

Amid the rush to create machine intelligence capable of making decisions or carrying out tasks in our place, there is growing concern about the unintended consequences of relying on AI to make decisions for us – a process known in law as automated decision-making (ADM).

At the same time, AI offers valuable opportunities, for example in education, health and wellbeing, and climate action.

What are the requirements for AI?

Balancing these risks and opportunities is what the European Union (EU) chose to focus on when it released its seven key requirements for ethical artificial intelligence on 8 April 2019. These requirements are:

  1. Human agency and oversight;
  2. Technical robustness and safety;
  3. Privacy and data governance;
  4. Transparency;
  5. Diversity, non-discrimination and fairness;
  6. Societal and environmental wellbeing; and
  7. Accountability.

This list of requirements is not intended to be exhaustive, and the order in which the requirements are presented mirrors the order in which the related principles and rights appear in the EU Charter.

The European Commission’s Ethics Guidelines for Trustworthy AI have generally been welcomed as a first step towards ending self-regulation and ensuring that, alongside the ethical questions posed by AI, the technological, legal and socio-economic aspects are also carefully addressed. The European Commission hopes that the guidelines will “put AI at the service of European citizens and econom[ies]”. The Commission’s accompanying communication, Building Trust in Human-Centric Artificial Intelligence, gives additional colour to these key requirements.

This article reviews the EU’s proposed framework for creating trustworthy AI, while considering the guidelines’ relevance to businesses producing machine learning tools and developing AI in the UK.

Why are we concerned about the trustworthiness of AI?

At its best, AI should be representative of (and useful to) all people. The EU’s guidelines put it as follows: “AI systems need to be human-centric, resting on a commitment to their use in the service of humanity and the common good, with the goal of improving human welfare and freedom”. Such systems should be designed to “augment, complement and empower human cognitive, social and cultural skills”.

However, the tech sector increasingly faces fearful consumers and policymakers. A recurring concern, for example, is that unmanaged ADM could produce technology that reproduces and entrenches bias and unfair decision-making in areas such as recruitment and pay, the use and recognition of language, healthcare diagnosis and treatment, and policing and fair treatment.

There have been several well-publicised examples of this – think of Microsoft’s Twitter bot Tay in March 2016, or facial-recognition software that is significantly more accurate for Caucasian men.

There is therefore an obvious commercial interest in reassuring the market: ensuring the quality and integrity of your input data, and making your development process traceable, reliable and resilient, with strategies or procedures in place to avoid creating or reinforcing unfair bias. The quality and trustworthiness of the AI you use or develop also seems likely to grow in relevance when seeking to attract and retain both financial investment and talented employees.
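By way of illustration of what such a bias-avoidance procedure might look like in practice – the guidelines do not prescribe any particular technique, and everything in the sketch below (the names, the data and the chosen test) is a hypothetical assumption – a development team might run a first-pass bias screen, such as the “four-fifths” disparate impact test, over a system’s recorded outcomes:

```python
# Minimal sketch of one possible first-pass bias screen: the
# "four-fifths" (disparate impact) rule. Hypothetical names and data.

from collections import defaultdict

def selection_rates(outcomes):
    """outcomes: list of (group_label, selected_bool) pairs."""
    selected = defaultdict(int)
    total = defaultdict(int)
    for group, picked in outcomes:
        total[group] += 1
        selected[group] += 1 if picked else 0
    return {g: selected[g] / total[g] for g in total}

def disparate_impact_ratio(outcomes):
    """Ratio of the lowest group selection rate to the highest.
    Values below ~0.8 are commonly treated as a red flag."""
    rates = selection_rates(outcomes)
    return min(rates.values()) / max(rates.values())

# Hypothetical recruitment-screening outcomes: (group, was_shortlisted)
audit_log = [("A", True), ("A", True), ("A", False),
             ("B", True), ("B", False), ("B", False)]

ratio = disparate_impact_ratio(audit_log)
print(f"Disparate impact ratio: {ratio:.2f}")
if ratio < 0.8:
    print("Potential unfair bias - review training data and model.")
```

A failing ratio does not of itself establish unlawful discrimination, but it flags where input data and model behaviour warrant closer review – the kind of documented, traceable procedure the guidelines favour.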

A framework for trustworthy AI

The guidelines propose a framework for trustworthy AI, supported by three pillars: lawfulness, ethics and robustness.

The lawfulness pillar encompasses adherence to the fundamental rights contained in the EU Treaties and Charter, as well as international human rights law. AI systems should not undermine these rights but should serve to “foster and maintain” them.

Unsurprisingly, there is considerable overlap between the ethics and lawfulness pillars. The ethics pillar rests on four further principles: (1) respect for human autonomy, (2) prevention of harm, (3) fairness, and (4) explicability – meaning that “the capabilities and purpose of the AI systems are openly communicated and decisions, to the extent possible, explained to those directly and indirectly affected”.

The robustness pillar requires AI systems to be technically robust, resilient and “not open to malicious use”. In practice, this requires that “AI systems be developed with a preventative approach to risks and in a manner such that they reliably behave as intended […]. This should also apply to potential changes in their operating environment or the presence of other agents that may interact with the system in an adversarial manner.”

The guidelines recognise that tensions may arise between these three pillars. The example given is “predictive policing”: while an AI program could help to reduce crime (prevention of harm), it may “entail surveillance activities that impinge on individual liberty and privacy” (both fundamental rights). The hope is that the overall benefits will exceed the individual risks. Protections will need to be put in place for situations where no ethically acceptable balance can be identified, for example in respect of absolute fundamental rights, such as human dignity, where no compromise is possible.

Each of the seven requirements listed at the start of this article sits under these pillars, and their implementation is intended to contribute to the realisation of trustworthy AI.

Are the EU’s guidelines legally binding?

The guidelines are not legally binding and are not intended to constitute legal advice. However, they are intended to put AI practitioners on the right track for future regulation.

The current guidelines include a pilot assessment list, which rests upon the seven key requirements. Once finalised, the assessment list is intended to serve as a checklist giving AI practitioners confidence that they are complying with non-legal standards for ethical AI.

Some examples from the pilot assessment include:

  • Who is the ‘human in control’ and what are the moments or tools for human intervention?
  • Did you verify how the system behaves in unexpected situations and environments?
  • Did you verify what harm would be caused if the AI system makes inaccurate predictions?
  • Did you establish oversight mechanisms for data collection, storage, processing and use?

This draft serves as a prelude to the European AI Alliance’s Trustworthy AI Assessment List, due to be updated and published in early 2020. In the meantime, stakeholders are invited to provide feedback on the pilot assessment list.

Although the EU is adopting a forward-thinking approach by developing these guidelines, AI and ADM are already active on the market and, to a certain extent, already legislated for at EU level. By way of example, elements of AI and ADM, such as automated processing and profiling, are already addressed in the General Data Protection Regulation (Regulation (EU) 2016/679), and further guidance is available from the European Data Protection Board. That legislation is relatively new, however, and concerns itself solely with the protection of individuals’ personal data as it is processed and controlled. It is not a full legislative framework for AI, but rather the fledgling beginnings of what will understandably become a distinct area of law.

Next steps

The Commission’s communication highlights the Commission’s intention to “continue its efforts to bring the Union’s approach to the global stage and build a consensus on a human-centric AI”.

Chapter 1 of the guidelines adds that “Europe needs to define what normative vision of an AI-immersed future it wants to realise, and understand which notion of AI should be studied, developed, deployed and used in Europe to achieve this vision”. It bears remembering that the EU’s normative power is far-reaching: once the EU legislates to regulate AI, all products accessing its internal market will be subject to the EU’s rules. This means that any EU legislation, regulation or industry norm will remain relevant to UK developers hoping to distribute or commercialise their programs in the EU.

It is also apparent that, where these principles have not been followed to date, businesses may face considerable costs in adapting their policies and processes. Sanctions are already in place for failure to comply with the GDPR.

Until legally binding rules are adopted, you may wish to use this time to prepare and to consider how you can demonstrate your commitment to developing trustworthy programs and AI.

Our specialised Technology, Privacy and Data Protection team are able to draft appropriate policies to help demonstrate your commitment to trustworthy AI, such as privacy policies, data processing and sharing policies or policies on the use of open source software. We can also assist with general commercial matters such as reviewing contracts or drafting commercial terms and conditions which commit your trading partners to matching your high standards.