AI and Recruitment

Over the last year we have seen a rise in the use of AI as part of recruitment processes, as employers look to increase efficiency, minimise human administration and scale their processes.

However, whilst the benefits are clear, significant risks are starting to be identified with these tools, particularly in terms of unintended bias and discrimination. To assist employers who are using, or considering using, AI in recruitment, we have put together a summary of the key risks to be aware of.

AI in Recruitment: Use Cases

Before jumping into the risks, it is helpful to have a basic understanding of how these tools are being used in recruitment, and the benefits this offers:

  1. Sourcing: Employers are using AI tools to create job adverts, identify potential candidates and filter CVs.
  2. Screening: AI tools have been created which can evaluate candidate profiles against predefined criteria. This streamlines the administrative process of narrowing the pool for interview.
  3. Interview: Some employers are using AI tools to draft interview questions, and in some cases even to conduct initial interviews. AI can also be used to analyse written and video interviews to identify strong responses against predefined criteria.
  4. Selection: AI tools are being used to identify and select the best candidates based on qualifications and skills.

Legal Risks and Challenges

As AI develops, the use cases above will expand, bringing even more tempting benefits for streamlining and simplifying the recruitment process. Whilst these leaps forward are impressive, and certainly useful for employers, it is imperative that anyone using these tools is aware of the legal risks.

1. Unintended Bias and Discrimination:

The biggest risk of using AI in recruitment is unintended bias and discrimination. As a learning model, AI draws on large volumes of historic data to make its decisions, and in recruitment that data will often reflect historic discriminatory hiring practices which favoured majority groups and men. Where the AI has learnt a pattern of characteristics shared by previously successful candidates, its decisions can inadvertently perpetuate those biases. For example, it may see that successful candidates for a CFO role in a large company over the last 30 years have been white males from private school backgrounds, and this learned data may result in it favouring similar candidates, even if these are not predefined criteria that the employer has requested.

As a result of this, discrimination against applicants based on gender, age, race or other protected characteristics is a significant concern.
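
To make the mechanism concrete, the sketch below is a deliberately simplified illustration of our own, not a description of any particular vendor's tool: a naive screening model that simply learns historical hire rates will score otherwise identical candidates differently, because the skew is already present in the data it learned from. All data and labels are hypothetical.

```python
# A deliberately simplified, hypothetical sketch of how a naive screening model
# trained on skewed historical hiring decisions can reproduce that skew.
# It is not a description of any real recruitment tool; all data is invented.

from collections import defaultdict

# Hypothetical historical records: (educational background, was the candidate hired?)
history = [
    ("private", True), ("private", True), ("private", True), ("private", False),
    ("state", True), ("state", False), ("state", False), ("state", False),
]

# "Train" by learning the historical hire rate for each background.
counts = defaultdict(lambda: [0, 0])  # background -> [hired, total]
for background, hired in history:
    counts[background][1] += 1
    if hired:
        counts[background][0] += 1

def score(background: str) -> float:
    """Score a new candidate purely from the historical hire rate of their group."""
    hired, total = counts[background]
    return hired / total if total else 0.0

# Two otherwise identical candidates are scored differently simply because of a
# pattern in the historic data - the bias is learned, not requested by the employer.
print(score("private"))  # 0.75
print(score("state"))    # 0.25
```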

2. Digital Exclusion:

One area of concern that has arisen as companies start to use these tools more is the risk of unfairly excluding candidates who lack proficiency in, or access to, technology. This creates an increased risk of discrimination on the grounds of age and disability.

Employers need to ensure that, where an AI recruitment process is used, they are able to make reasonable adjustments where required so that candidates with disabilities can engage with the process, and that they are not unfairly excluding potential candidates.


3. Data Protection and Privacy:

Another issue that has been raised regarding AI processes is compliance with the UK GDPR and data protection laws. Recruitment requires these AI systems to process personal data, and that processing must comply with data protection law.

Key tenets of these laws include transparency, informed consent and secure data handling. To comply with the UK GDPR, employers need to understand how the AI is processing this data, and need to carefully consider the use case for, and necessity of, the programme. If, as part of the process, employers are processing sensitive personal data, such as medical information, even more safeguards will be required.

Employers are encouraged to discuss the security systems of AI programmes with the third-party supplier, to carry out their own checks, and to ensure that employees using these processes are suitably trained.

Mitigating Risks

While the benefits of AI processes are considerable, employers must strike a balance between efficiency and their legal and ethical obligations. To navigate these challenges, steps employers can take include:

  • Assess AI Systems: Evaluate AI tools for fairness, transparency, and compliance.
  • Track performance: Use metrics and processes to monitor AI performance and track areas of high risk (an illustrative example of one such check is sketched after this list).
  • Choose Trustworthy Suppliers: Verify claims made by AI system providers and ensure you understand how data is being processed and safeguarded.
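
By way of illustration only, the sketch below shows one simple metric an employer could track as part of such monitoring: comparing selection rates across candidate groups at a given stage and flagging any group whose rate falls well below the highest. The group labels and figures are hypothetical, and the 0.8 threshold borrows the US "four-fifths" rule of thumb purely as an example rather than a threshold set by UK law.

```python
# Illustrative only: a simple monitoring check comparing selection rates across
# candidate groups at one stage of an AI-assisted process. The group labels,
# figures and 0.8 threshold (borrowed from the US "four-fifths" rule of thumb)
# are hypothetical examples, not thresholds set by UK law.

from typing import Dict, Tuple

def selection_rates(outcomes: Dict[str, Tuple[int, int]]) -> Dict[str, float]:
    """outcomes maps group -> (candidates progressed, candidates considered)."""
    return {group: selected / total for group, (selected, total) in outcomes.items()}

def flag_disparities(outcomes: Dict[str, Tuple[int, int]], threshold: float = 0.8) -> Dict[str, float]:
    """Flag groups whose selection rate is below `threshold` times the highest rate."""
    rates = selection_rates(outcomes)
    best = max(rates.values())
    return {group: rate / best for group, rate in rates.items() if rate / best < threshold}

# Hypothetical screening outcomes for one recruitment round.
screening = {
    "group_a": (60, 100),
    "group_b": (30, 100),
}

print(selection_rates(screening))   # {'group_a': 0.6, 'group_b': 0.3}
print(flag_disparities(screening))  # {'group_b': 0.5} - flagged for human review
```

A flagged stage would not of itself prove discrimination, but it is a prompt to review the criteria the tool is applying before relying on its output.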

This is a complicated area, but our team are available to provide support and advice, whether in the initial stages of considering the risks or once risks have been identified. Please do reach out to our employment lawyers if you would like more information; we would be happy to help.

About this article

Disclaimer
This information is for guidance purposes only and should not be regarded as a substitute for taking legal advice. Please refer to the full General Notices on our website.

Lucy Densham Brown

Solicitor

+44 118 960 4655

