Felicity Harber v The Commissioners for HMRC [2023] UKFTT 1007 (TC)
In this recent case, the First-tier Tribunal (Tax Chamber) gave a stark warning to litigants about the use of AI in litigation. Ms Harber, a litigant in person, had failed to notify HMRC of, and pay, the Capital Gains Tax (CGT) due following the sale of a property. A penalty of £3,265.11 was subsequently imposed, which Ms Harber appealed on the ground that she had a reasonable excuse for her failure to pay the CGT.
In an attempt to win the appeal, Ms Harber put forward nine fictitious cases she had found using ChatGPT, a generative Artificial Intelligence (AI) tool. When she put the cases to the Tribunal at the hearing, the judges were unable to locate any of them on any available database. Ultimately, Ms Harber lost her case: the Tribunal concluded that she did not have a reasonable excuse for her failure to notify HMRC of her liability for CGT. Although the Tribunal accepted that she had relied on the nine decisions innocently, it nonetheless gave a strong caution against litigants using AI technology, highlighting the dangers it poses.
The dangers of AI
As we witness the inevitable expansion and adoption of AI, we are reminded that such a significant tool is not without its risks. Generative AI produces a range of content, including images, text, videos and other media, from the information and data it receives. AI can be a legitimate legal research tool, especially for litigants in person who may not have access to professional legal databases. However, as the above case shows, one of its problems lies in accuracy. With the wealth of information AI absorbs, it can distort facts and "hallucinate" plausible but false material, and the data on which it is trained may contain biases and false patterns.
Whether you are a litigant in person or a legal professional, a cautious approach must be taken when using AI for research in legal cases. Relying on false information will not only affect the outcome for the parties concerned, but will also damage their credibility and reputation. We anticipate that judges will take a tough approach towards the incorrect or inappropriate use of AI technology in the future.
Other areas
Legal research is not the only area in which AI poses a threat. Other areas to watch include:
- Privacy and confidentiality. AI relies on data and information to generate content and, in turn, may produce content that includes private or confidential data.
- Fraud. AI tools are already being used, and will increasingly be used, in fraudulent activities, for example by combining real and/or fabricated data to create convincing scams.
- Intellectual property. Given the wealth of data and content that AI tools ingest and produce, we are highly likely to see an increase in intellectual property litigation, for example over who owns the rights to AI-generated content.
If you have any further questions in relation to legal research, or litigation more generally, please contact our Dispute Resolution solicitors.
About this article
- Subject: The Era of AI
- Author
- Expertise: Litigation and dispute resolution
- Published: 02 December 2024
Disclaimer
This information is for guidance purposes only and should not be regarded as a substitute for taking legal advice. Please refer to the full General Notices on our website.