The AI Act is currently under discussion by EU countries, with the European Parliament aiming to reach a decision by the end of 2023. The stated objective of the Act is to ensure that AI developed and used in Europe is fully in line with EU rights and values, including human oversight, safety, privacy, transparency, non-discrimination, and social and environmental wellbeing.

The AI Act is the European Commission's proposal for the first EU regulatory framework for AI. If enacted, it will apply to EU member states, which excludes the United Kingdom. However, if your UK-based business operates across jurisdictions and is active (even commercially) in the EU, then this new framework (if enacted) could impact your business. It is important to note that, at the time of writing, there is no plan for a mirrored act in the United Kingdom. The UK favours an innovation-led approach to AI, and the government's current strategic vision is to make the UK a science and technology superpower by 2030.

Different rules for different risk levels

The Act seeks to establish obligations for providers and users of AI tools depending on the level of risk from the use of artificial intelligence.

Unacceptable risk

AI systems posing an unacceptable risk are considered a threat to people and will be banned. They include:

  • Cognitive behavioural manipulation of people or specific vulnerable groups
  • Social scoring: classifying people based on behaviour, socio-economic status or personal characteristics
  • Real-time and remote biometric identification systems in publicly accessible spaces, such as facial recognition

There may be some exceptions to aid with the prosecution of serious crimes but only after court approval.

High risk

AI tools or systems that negatively affect safety or fundamental rights will be considered high risk under the Act and will need to be assessed before being put on the market.

High-risk systems are intended to be divided into two categories:

  • AI systems that are used in products falling under the EU's product safety legislation, such as toys, aviation, cars and medical devices
  • AI systems falling into eight specific areas that will have to be registered in an EU database:
      • Biometric identification and categorisation of natural persons
      • Management and operation of critical infrastructure
      • Education and vocational training
      • Employment, worker management and access to self-employment
      • Access to and enjoyment of essential private services and public services and benefits
      • Law enforcement
      • Migration, asylum and border control management
      • Assistance in legal interpretation and application of the law

Limited risk

Anything that does not fall into the above categories is likely to be considered "limited risk". Such limited-risk systems will be subject to minimal transparency requirements, centred on allowing users to make informed decisions.

Generative AI

Generative AI tools, such as ChatGPT, Bard and Adobe Firefly, will have to comply with transparency requirements such as:

  • Disclosing that the content was generated by AI
  • Designing the model to prevent it from generating illegal content
  • Publishing summaries of copyrighted data used for training

The last requirement should reveal whether the company or individuals who built the tool breached copyright law when building and training the AI.


Ultimately, the AI Act (if enacted) will not directly impact UK businesses insofar as their UK operations are concerned. However, if your UK-based business is operational in the EU (e.g., providing software tools to customers in the EU), this new framework is likely to impact your business: if your system is considered "unacceptable risk", it will be banned; if it is high risk, it will need to be assessed before commercialisation in the EU.

If you are developing, or commissioning the development of, software featuring AI and you intend to enter the EU market or expand globally, it may be wise to build compliance with the proposed AI Act (or mirror its prohibitions) into any software development agreement.

Amy Ralston is an Associate at Stephens Scown specialising in intellectual property and technology law. Amy regularly helps businesses understand their intellectual property position and formulate a strategy for commercialisation.

If you are seeking advice or have any questions in relation to this article, you can contact us by calling 0345 450 5558 or by emailing



If you have any further enquiries regarding the AI Act, please contact our Intellectual Property, Data Protection and Technology team; we would be happy to help.