
On 23 October 2019, the European Consumer Organisation BEUC (Bureau Européen des Unions de Consommateurs) published a report titled AI Rights for Consumers.

BEUC considers that the use of AI is becoming a defining feature of our markets and societies and that, as a consequence, consumer rights should evolve to hold companies responsible when things go wrong (as they currently do in other sectors). 

This paper builds upon the European Commission’s High-Level Expert Group on Artificial Intelligence’s “Ethics Guidelines for Trustworthy AI”, which we have commented upon in this previous article. It is BEUC’s opinion that the Commission’s report does not go far enough to give consumers confidence in emerging technologies. However, both documents are driven by the same issue, which we shall look at in more detail in this article. 

It is often the case that legislation is a step behind the market, so the absence to date of tailored consumer protection legislation in respect of artificial intelligence (AI) is not surprising. Nonetheless, given the potential risks (and benefits) associated with AI, together with its rapid development and deployment, it seems prudent to establish a framework of general legal principles on the development and commercial exploitation of AI as soon as possible. This body of legislation will inevitably be required to adapt and grow over time. Technology and digital policy remain a priority for the UK and the European Union; we are therefore likely to see developments in this area. 

If the proposals in BEUC’s report were progressed, resulting legislation would join an existing corpus of consumer rights legislation which has gained in importance with the increased reliance on online transactions. This article considers the desirability of legislation, the measures proposed by BEUC and how/if they add value to the existing legal landscape. 

What is Artificial Intelligence?

When we use the words “Artificial Intelligence” or “AI” in this article, we mean software (and machines) that, drawing on large quantities of data, mimics and/or exceeds human capabilities – the “continual endeavour to push forward the frontier of machine intelligence”. We include machine learning within this definition as a key component in the development of such software. 

What is algorithmic decision making?

BEUC’s report focuses on algorithmic decision systems (ADS), and so, for the purposes of this article, do we. These are systems used to support or replace human decision-making. They often rely on the analysis of large amounts of personal data, looking for patterns and information identified as “useful” in order to make a decision. 

There are many stakeholders in the development of ADS, which provide opportunities to exploit and risks to manage. These systems may have considerable impact on consumers’ lives, as they gain traction in the administration of justice, banking, preventative medical care or treatment, insurance, etc. 

Who are consumers?

When we talk about “consumers”, we mean an individual acting for purposes which are wholly or mainly outside of their trade, business, craft or profession.

Desirability and necessity of consumer right protection in respect of AI

An obvious and understandable concern from a research & development perspective is that increased regulation will over-burden this fledgling area of technology. If the legislative burden on businesses is too heavy, it will not be possible for a compliant industry to develop AI tools to their full potential. The result could drive black-market and illegal operation of AI. 

At the time of writing, AI is not fully autonomous – if it were, its regulation would of course be of primary concern. Nonetheless, the other side of the coin is attractive to the legislature and government: the creation of a legal framework in which business can build ethical and consumer-focused AI, circumventing the risk that AI continues to develop without regulation until we reach the point of no return. When balancing public and economic interest, the question arises of when AI will be sufficiently developed to legislate. The two interests rarely align perfectly, and it is natural that public interests should prevail. The argument is circular, however, because technological progress and a buoyant economy are also in the public interest.  

The media also focus on discrimination by AI, where ADS make decisions based solely on our data-selves. Obvious as it may seem, ADS may differentiate (read: discriminate) based on information that is not currently a protected characteristic. As consumers become increasingly aware of the pervasiveness of data collection, fears are growing around consumer tracking, profiling and the decisions made in their regard. 

BEUC have identified the following rights as being necessary in respect of ADS in order to protect human autonomy and dignity whilst ensuring a high level of consumer protection: 

  1. Transparency, Explanation and Objection; 
  2. Accountability and Control;
  3. Fairness;
  4. Non-discrimination;
  5. Safety and Security;
  6. Access to Justice; and
  7. Reliability and Robustness.

The BEUC considers that, without these rights, consumer trust in the underlying technologies cannot be achieved. The higher the potentially adverse impact of the ADS, the stronger the appropriate legislative response should be. 

What is BEUC proposing and why is it relevant to industry?

The list of recommendations above is remarkable for its similarity to the list published by the Commission earlier this year. 

Whereas the Commission’s Guidelines (“Guidelines”) focused on the checks businesses should adopt internally, BEUC’s report focuses on industry’s external expression of these attitudes. 

For example, BEUC proposes that businesses have technical and organisational measures in place to ensure that they are accountable for decisions based on ADS and that there should be a supervisory authority with competence to impose certification and transparency measures. The proposals go so far as to suggest pre-approval before market deployment of ADS that will carry the highest levels of risk. 

This means that whatever regulation is imposed, there may be a significant increase in the responsibility placed on organisations using ADS in respect of consumers. 

BEUC’s recommendations also consider the functionality of AI over the course of the product’s lifetime. In the same way as consumers have come to expect that their goods will be safe, secure and functional for the lifetime of the product, they should be able to rely on the same assumption in respect of digital content products. They should be reassured that regulatory oversight is in place and that businesses have minimised risks when developing the AI/ADS in question. Consumers should have a statutory right to judicial redress where this is not the case.

BEUC proposes that businesses should be required to deliver updates throughout the lifecycle of a product to protect functionality and limit security risks. They consider that consumers should be entitled to expect this. This, of course, raises additional concerns surrounding making the expected lifecycle of a product known. 

These issues sit firmly on the Commission’s radar and, on 21 November 2019, they published their Report on Liability for Artificial Intelligence and other Emerging Technologies, reviewing existing liability regimes and perspectives for future legislation on emerging digital technologies. 

Finally, the reliability and robustness of the data in training sets and quality assurance is also highlighted. BEUC propose that review and validation processes become subject to a common industry standard, meaning these proposals would have a direct impact on businesses should they be carried across into future legislation. This is an area of overlap with the Guidelines and therefore industry should prepare for such changes to enter into law over the coming years. 

Although beyond the scope of this article, it does not seem unreasonable for businesses relying on ADS to harbour similar expectations. Those businesses relying on ADS in respect of decisions they make that affect consumers are also likely to expect warranties from their suppliers in respect of their compliance with consumer protection legislation. 

The cross over with GDPR

It is easy to see how the aims pursued by BEUC overlap with those of the General Data Protection Regulation (2016). Their proposals could comfortably be described as a natural continuation of the work commenced by the regulation, a sort of GDPR ‘plus’.

Consider, for example, the proposed transparency, explanation and objection rights. BEUC’s proposal that consumers should be able to understand how decisions about them have been made, including what data is being processed, when and by whom, is reminiscent of GDPR but would apply to all data (not just personal data). The likeness is amplified by the proposed right to request rectification of a wrong or unfair decision made by the ADS based on consumers’ data. Individuals should be able to object to the ADS and request human intervention. These provisions could be particularly problematic for businesses, depending on the scale of the data processed by the ADS. For example, businesses may not: 

  • know how the decision was reached (consider “accountability” and the current understanding of machine learning); and 
  • have the manpower to process each customer’s data without recourse to ADS.

However, it bears remembering that floodgate concerns are rarely borne out – consider, for example, the concern that every customer would request their personal data by way of Subject Access Requests. 

There are already extensive data protection considerations under GDPR when venturing into the sectors of automated decision systems and data processing. In particular, businesses are required to carry out Data Protection Impact Assessments whenever they undertake: 

  1. Systematic and extensive evaluation/profiling of individuals with significant effects on those individuals; 
  2. Large scale processing of sensitive data; and
  3. Monitoring of a publicly accessible area on a large scale.

Any consumer rights legislation in respect of AI is likely to build on the rights already acquired under GDPR. The burdens are likely to increase, as the risks for consumers also increase. 

Next steps

Although the legislative framework remains embryonic, you should consider this growing trend towards the regulation of AI and review your internal processes. 

The EU Commission has promised to propose legislation on the regulation of artificial intelligence in the first 100 days of its mandate (from 1 November 2019). The EU’s normative power is far reaching. Once the EU does legislate on the regulation of AI, all products accessing its internal market will be subject to the EU’s rules. This means that any EU legislation, regulation or industry standard will continue to be relevant to UK developers hoping to distribute or commercialise their programs in the EU – regardless of the UK’s EU membership status.

Our specialised technology, privacy and data protection team are able to draft appropriate policies to help demonstrate your commitment to trustworthy AI and the protection of consumer rights, such as privacy policies, data processing and sharing policies. They can also assist with general commercial matters such as reviewing contracts or drafting commercial terms and conditions.