AI and Data Protection – Is Fair and Transparent Privacy Possible?
- 23 October 2025
- Privacy and Data Protection
Every facet of daily life is now governed to some degree by phones, the web or some other form of connected technology.
There is no question that advances in technology improve communication. The rate of information exchange now, compared with 10 years ago, is staggering. Artificial intelligence (AI) and machine learning are revolutionising the way in which we interact with each other and conduct business.
But are these advances compatible with the principles of fairness and transparency under the UK GDPR?
Article 5 of the UK GDPR requires personal data to be processed lawfully, fairly and in a transparent manner.
Fairness is not defined, but the principle is understood to refer to the effect processing has on an individual’s rights and freedoms. The Information Commissioner’s Office (ICO), the supervisory authority for data protection in the UK, considers that fairness means you should “only handle data in ways that people would reasonably expect and not use it in ways that have unjustified adverse effects on them”.¹
Transparency requires that any information relating to processing be easily accessible and easy to understand, using clear and plain language. In practice, this concerns the information an organisation must provide to individuals about data processing, usually set out in a privacy notice.
Machine learning is now used in many everyday applications. Common uses include search engines, browser applications, social media websites like X (formerly Twitter), AI chatbots such as ChatGPT, and smart assistants such as Amazon’s Alexa, Google Assistant and Apple’s Siri. At the forefront of machine learning is deep learning, which analyses massive amounts of data through layered networks, with each layer classifying data based on the results of the previous layer. The accuracy of the resulting model depends on how much data is analysed: the larger the dataset, the more accurate the model tends to be.
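The layered processing described above can be illustrated with a minimal sketch: each layer takes the previous layer’s output as its input, and a final score is produced at the end. The weights below are arbitrary illustrative values, not a trained model.

```python
import math

def relu(values):
    # Common activation: negative signals are zeroed out.
    return [max(0.0, v) for v in values]

def sigmoid(x):
    # Squashes any number into a score between 0 and 1.
    return 1.0 / (1.0 + math.exp(-x))

def dense(inputs, weights, biases):
    # Each neuron computes a weighted sum of the previous layer's outputs.
    return [sum(w * i for w, i in zip(row, inputs)) + b
            for row, b in zip(weights, biases)]

def forward(features):
    # Layer 1: two hidden neurons process the raw input features.
    hidden = relu(dense(features, [[0.5, -0.2], [0.3, 0.8]], [0.1, -0.1]))
    # Layer 2: one output neuron classifies based on layer 1's results.
    output = dense(hidden, [[1.0, -1.5]], [0.0])[0]
    return sigmoid(output)  # probability-like score

score = forward([1.0, 2.0])
print(round(score, 3))  # prints 0.076
```

In a real system the weights are learned from training data, which is why the quality and breadth of that data determines how the model behaves.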
xAI’s chatbot, “Grok”, was developed to answer user questions on X. The company was recently criticised after the chatbot produced posts pushing antisemitic narratives. xAI apologised and attributed the behaviour to the data the model had been trained on.
Another example is an early version of Amazon’s AI-powered hiring tool. This was found to be biased against female candidates as it had been trained on previously successful resumes, most of which were from male candidates.
Machine learning outputs can therefore discriminate and present a distorted view of the world, depending on what data is analysed and how much. If the datasets used are limited or incorrect, the model may be inherently biased. Because of this concern, the UK GDPR restricts automated decision making (ADM) without human intervention where it is used to make decisions with significant effects on individuals.
Automated decision making in this context could include the refusal of credit by credit providers or the assessment of job applications by online recruiters.
Such automated decision making is permitted under the UK GDPR where it is authorised by law, where the individual has given explicit consent, or where it is necessary for the performance of a contract. However, the passing of the Data (Use and Access) Act 2025 (DUAA) has overhauled the ADM environment in the UK. The general prohibition under the UK GDPR has been relaxed, and now applies only to significant decisions involving special category data. The DUAA affords UK businesses significantly greater scope to use ADM by allowing them to rely on ‘legitimate interests’ as their lawful basis for processing personal data.
The DUAA has also introduced mandatory safeguards, requiring companies to inform individuals that ADM is taking place, and empowering individuals to challenge ADM after the decision has been made. This framework grants UK businesses more regulatory freedom to adopt AI and ADM, whilst also providing a robust system for affected individuals to challenge decisions.
Another challenge to compliance is the inherent opacity of sophisticated technologies. Whilst organisations have an obligation under the UK GDPR to inform individuals about the nature of any processing, this is often not feasible given the technical complexity involved and the many layers of data collection. Customers are unlikely to want to read lengthy, detailed privacy notices, and many will not understand the details even if presented with them.
Clearview AI Inc. uses an AI system to act as a search engine for faces, used primarily by law enforcement and other government agencies. Users can upload an image of a person’s face, and Clearview will search through its extensive database to find matches. Clearview created its library of faces, which is over 20 billion strong, by gathering images from publicly available online sources, including social media. Many people may not be aware that their images uploaded to social media may form part of this database and are being actively used by Clearview’s AI model.
Clearview has successfully overturned an ICO enforcement notice and fine on the basis that it only works for security and law enforcement clients outside of the UK/EU.
Concerns surrounding the use of AI technologies were recognised by 122 of the world’s Data Protection and Privacy Commissioners at the International Conference of Data Protection and Privacy Commissioners held in October 2018. In their declaration, the Commissioners endorsed the promotion of principles such as the fairness and transparency principles found in the UK GDPR.
They called for common governance principles on AI to be established at an international level. As a first step, the Conference established a permanent working group, the Working Group on Ethics and Data Protection in Artificial Intelligence. It released a report in July 2022.
It is hoped that these wider issues will continue to be considered at government level, but also on a voluntary basis by the global corporations which are both the purveyors and consumers of our data. Co-operation at all levels will be required if real data protection compliance is to be achieved.
Disclaimer
This information is for guidance purposes only and should not be regarded as a substitute for taking legal advice. Please refer to the full General Notices on our website.