AI and Data Protection – Is Fair and Transparent Privacy Possible?

We live in a digital world.

Every facet of daily life is governed to some degree by phone, web or some form of connected technology.

There is no question that advances in technology improve communication. The rate of information exchange now, compared with 10 years ago, is staggering. Artificial intelligence (AI) and machine learning are revolutionising the way in which we interact with each other and conduct business.

But are these advances compatible with the principles of fairness and transparency under the General Data Protection Regulation (GDPR)?

Fairness and Transparency

Article 5 of the GDPR requires personal data to be processed lawfully, fairly and in a transparent manner.

Fairness is not defined, but the principle is understood to refer to the effect processing has on an individual’s rights and freedoms. The Information Commissioner’s Office (ICO), the supervisory authority for data protection in the UK, considers that fairness means you “do not handle data in ways that people would reasonably not expect and not use it in ways that have unjustified effects on them”.¹

Transparency requires that any information relating to processing be easily accessible and easy to understand, with clear and plain language used. This relates to the information an organisation must provide to individuals about data processing, usually contained in a privacy policy.

Machine learning is now used to some degree in most everyday applications. Common uses include search engines, browser applications, social media websites such as Facebook, AI chatbots, and smart assistants such as Amazon’s Alexa and Echo and Google Home. At the forefront of machine learning is deep learning, which analyses massive amounts of data through layered networks, with each layer classifying data based on the results of the previous layer. The accuracy of the results (or model) depends on how much data is analysed; the larger the dataset, the more efficient or accurate the model.
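The layered idea described above can be sketched in a few lines. This is a minimal, purely illustrative example (not any production system, and the weights here are random rather than trained): each layer transforms the output of the previous one, and the final layer produces the classification.

```python
import numpy as np

rng = np.random.default_rng(0)

def layer(x, weights):
    # Each layer applies a weighted combination of its inputs followed by
    # a non-linearity (ReLU) to the previous layer's output.
    return np.maximum(0, x @ weights)

x = rng.normal(size=(1, 4))    # input features
w1 = rng.normal(size=(4, 8))   # weights for the first hidden layer
w2 = rng.normal(size=(8, 2))   # weights for the output layer (two "classes")

hidden = layer(x, w1)          # first layer's result...
scores = layer(hidden, w2)     # ...feeds the next layer
prediction = int(np.argmax(scores))  # classify from the final layer's output
```

In a real system the weights would be learned from data, which is precisely why the quantity and quality of that data matter so much.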

You may have heard about Microsoft’s AI chatbot Tay, which was developed to interact with young people on Twitter. The chatbot was shut down after 16 hours because of what it had learned: after interacting with a number of users, it had been taught to post offensive, sexist and racist tweets. This was due to the limited number and nature of the machine’s interactions with users.

Machine learning outputs can therefore discriminate and present a distorted view of the world depending on what and how much data is analysed. If the datasets that are used are limited or from one sub-set of the population, then the model could be inherently biased.
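A toy example (hypothetical, with made-up numbers) illustrates how a limited dataset produces a distorted model. Here a trivial “model” is fitted only to one narrow sub-set of values and then applied to the whole population:

```python
def fit_threshold(samples):
    # "Train" a trivial model: use the mean of the observed values
    # as a decision boundary.
    return sum(samples) / len(samples)

# Suppose the full population spans 0-100, but the training data
# happens to come from one narrow sub-set of it.
narrow_subset = [10, 12, 14, 11, 13]
threshold = fit_threshold(narrow_subset)  # boundary of 12.0

# Applied to the full population, the model labels almost everyone
# "above threshold" -- a distortion caused purely by the limited data.
population = range(0, 101)
above = sum(1 for value in population if value > threshold)
```

The flaw is not in the algorithm itself but in the data it was given, which is exactly the fairness risk the GDPR is concerned with.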

Because of this concern, the GDPR restricts the use of automated decision-making without human intervention where the decision has legal or similarly significant effects on individuals.

Automated decision-making in this context could include the refusal of credit by credit providers or the assessment of job applications by online recruiters.

Such automated decision-making is permitted under the GDPR where it is authorised by law, where the person has given explicit consent, or where it is necessary for the performance of a contract.

In the latter two cases, the person must have the right to obtain human intervention and a review of the decision. Consistent with the right to be informed, a person ought to be made aware that they will be the subject of automated decision-making.

How organisations are meeting these obligations in practice is difficult to gauge; an individual is wholly reliant on the organisation to inform them as to whether any decisions are being made by solely automated means.

This leads to another challenge to compliance: the inherent opacity of sophisticated technologies. Whilst organisations have an obligation under the GDPR to inform individuals about the nature of any processing, this is often not feasible because of technical complexity or the many layers of collection. Customers are not going to want to read lengthy, detailed privacy policies, and many will not understand the details even if presented with them.

Recently, Google revealed that its staff listen in to conversations through Google Home speakers. This was, according to Google, to improve speech technology by transcribing sets of enquiries. Google also revealed that the devices had at times accidentally recorded private conversations without the user’s knowledge.

Similarly, many people may not be aware that Amazon retains transcripts of conversations held through its voice-based assistant, Alexa, again apparently in order to train the AI’s responses.²


Would people reasonably expect their private conversations to be retained and read by third parties?

Both organisations have since come out publicly to defend the technology and said they were investigating.

Concerns surrounding the use of AI technologies were recognised by 122 of the world’s data protection and privacy commissioners at the International Conference of Data Protection and Privacy Commissioners held in October 2018. In their declaration, the Commissioners endorsed the promotion of principles such as the fairness and transparency found in the GDPR.

They called for common governance principles on AI to be established at an international level. As a first step, the Conference established a permanent working group, the Working Group on Ethics and Data Protection in Artificial Intelligence. It has yet to release any formal publication.

It is hoped that not only will these wider issues be considered at government level, but also on a voluntary basis by the global corporations which are both the purveyors and consumers of our data. Certainly co-operation at all levels will be required if real data protection compliance is to be achieved.

 

¹ ICO Principle (a): Lawfulness, fairness and transparency: https://ico.org.uk/for-organisations/guide-to-data-protection/guide-to-the-general-data-protection-regulation-gdpr/principles/lawfulness-fairness-and-transparency/

² Mirror Tech by Shivali Best, Martyn Landi, July 12: https://www.mirror.co.uk/tech/google-admits-staff-listen-customer-17997364

 

Disclaimer
This information is for guidance purposes only and should not be regarded as a substitute for taking legal advice. Please refer to the full General Notices on our website.
