Artificial intelligence in public services

Advice and Guidance

Which countries is it relevant to?

  • Great Britain

Who this guide is for

This guide is for anyone in a public body in England, Scotland or Wales who is:

  • procuring, commissioning, building or adapting artificial intelligence (AI) for their workplace or the services they provide
  • responsible for making decisions on whether and how to introduce AI
  • responsible for using AI, or providing oversight or scrutiny of any service using AI
  • responsible for training staff who are using AI

It will also be useful to:

  • anyone responsible for developing or using AI as part of a service they are delivering on behalf of a public body

What this guide covers

This guide provides:

  • an overview of what artificial intelligence is
  • guidance on how the Public Sector Equality Duty applies when a public body uses artificial intelligence
  • a checklist for public bodies in England (and non-devolved and cross-border public bodies)

It does not cover how inappropriate use of AI may lead to breaches of other laws, such as the Data Protection Act 2018 and the Human Rights Act 1998.

What artificial intelligence is

Artificial intelligence, machine learning and automated decision-making are terms that refer to a wide range of technologies used across the private and public sectors. In this guide, we refer to these technologies collectively as artificial intelligence (AI).

AI is the science and practice of using computers to support decision-making or the delivery of services and information. It involves programming computers to sift large volumes of data and learn to answer questions or deal with problems. For example, in the public sector this could include using programs to help allocate benefits or to estimate the risk of an individual committing fraud.
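
As a purely illustrative sketch, the Python example below shows this basic pattern of learning from historical records and applying what has been learned to a new case. The records, field names and scoring rule are invented for this guide and do not describe any real public sector system.

    # Purely illustrative: a toy fraud 'risk score' learned from invented
    # historical records. Real AI systems are far more sophisticated, but the
    # pattern is the same: learn from past data, then apply it to new cases.
    from collections import defaultdict

    # Hypothetical past cases: (case attributes, was fraud found?)
    historical_records = [
        ({"late_documents": True}, True),
        ({"late_documents": True}, False),
        ({"late_documents": False}, False),
        ({"late_documents": False}, False),
    ]

    # 'Training': count how often fraud was found for each attribute value.
    counts = defaultdict(lambda: [0, 0])  # value -> [fraud found, total seen]
    for attributes, fraud_found in historical_records:
        value = attributes["late_documents"]
        counts[value][0] += int(fraud_found)
        counts[value][1] += 1

    def risk_score(attributes):
        """Estimate fraud risk for a new case from historical frequencies."""
        fraud, total = counts[attributes["late_documents"]]
        return fraud / total if total else 0.0

    print(risk_score({"late_documents": True}))   # 0.5
    print(risk_score({"late_documents": False}))  # 0.0

Because the score is learned directly from past records, any bias in those records is carried straight into the decisions the system supports, which is why the risks described later in this guide matter.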

Facial recognition software is an example of AI in use. It involves checking people’s faces against pre-existing images already held on a database. It can be linked to a network of cameras and is used by the police in some areas with high crime levels, and at border crossings to check the identity of travellers.

AI and new digital technologies are transforming how public services are delivered. They have the potential to improve equality, but they may also lead to discrimination.

If public bodies don’t take steps to guard against this, they may face reputational damage and legal action for breaches of the Equality Act 2010, including the Public Sector Equality Duty (PSED).

Benefits of artificial intelligence

By combining multiple data sets, AI can enable more informed decision-making while reducing the likelihood of human error. For example, an AI system is likely to make more consistent decisions than people would when given the same information. Automating decisions can help to reduce staff costs and is not limited by the constraints of normal working hours. AI can also help to target services more efficiently and quickly.

Risks of artificial intelligence

AI systems may lead to discrimination and deepen inequalities. Discrimination may happen because the data used to help the AI make decisions already contains bias. Bias may also occur as the system is developed and programmed to use data and make decisions. This process is often referred to as ‘training’ the AI. The bias may result from the decisions made by the people training the AI. Sometimes the bias may develop and accumulate over time as the system is used.  

Example: a police force arrests younger people and Black people more than others. Most of these people are released without charge but their facial images are kept and used to train the force’s facial recognition technology, which is used to inform police tactics such as where police officers are deployed. The police force realises that the facial recognition technology is influencing who it arrests and may be reinforcing existing bias. This is happening because the AI is being trained using very limited data. The force suspends its use of facial recognition technology pending a review of the options to retrain the system using new data that is free from bias. It also monitors how the new system works to make sure it does not lead to discriminatory outcomes.
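
The mechanism in this example, biased data shaping tactics which then generate more biased data, can be sketched with a short simulation. All of the figures, area names and the simple retraining rule below are invented for illustration; they are not drawn from any real police system.

    # Purely illustrative simulation of the feedback loop described above.
    # Both areas have the same true level of offending, but historical
    # over-policing of area A skews the arrest data used for training.
    true_offending = {"area_A": 100, "area_B": 100}   # identical in reality
    arrest_data = {"area_A": 150, "area_B": 50}       # biased starting point

    for round_number in range(1, 6):
        total = sum(arrest_data.values())
        # 'Retraining': the system estimates risk from the arrest data it holds.
        estimated_risk = {area: count / total for area, count in arrest_data.items()}

        # Tactics follow the estimate: extra patrols go to the 'highest risk'
        # area, so most new arrests happen there, whatever the true levels.
        patrolled = max(estimated_risk, key=estimated_risk.get)
        arrest_data[patrolled] += true_offending[patrolled]

        share_a = arrest_data["area_A"] / sum(arrest_data.values())
        print(f"round {round_number}: share of arrests in area_A = {share_a:.0%}")

Even though offending is identical in both areas, the share of recorded arrests in area A climbs with each round. This is the kind of accumulating bias that the monitoring described in the example is intended to catch.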

AI can be complex. When public bodies buy AI systems, there is a risk that their staff will not understand how they work or how they make decisions. Where this is the case, it may be difficult to make sure the AI is working as intended and making decisions fairly.

UK government resources on artificial intelligence

Last updated: 01 Sep 2022