Why AI presents really concerning legal risks, and why you should use it anyway! (Part 1)
- Harry Jennings
- Apr 24
- 5 min read
Part 1. The arguments
We all know that AI products and services are not perfect. We have been told about inaccuracy and bias, we’re worried about ownership and infringement risks, and it seems like we might struggle to keep things confidential and protect privacy. Is this all just hyped technology that is sending us off in the wrong direction? We don’t think so, but there are arguments that are likely to arise in AI deals and relationships.
This is a two-part article. Part 1 looks at the arguments that could be made about data, AI and the various players involved in developing and commercially exploiting it. In Part 2, we will explore the application of law to these arguments and consider mitigating measures to deal with legal risks.
My input data was biased and now my AI system is biased too, and we have received a complaint.
These are possible arguments that could be made:
- The person who provided the data has not met the promises they made.
- The person who collected or provided the data should have checked it and cleansed or corrected it.
- You should have made a better choice of input data.
- You should have checked and then cleaned or corrected the data.
- The AI system developer should have noticed the problem and highlighted it to you.
- The AI system developer should have built a system that accounted for bias and found a way of minimising or correcting its effect on the output.
- Some bias is a reality of the system and does not make it a bad system, but you should have made users aware of it.
- Everyone knows about potential bias issues. The system is defective and doesn’t meet the promises you made.
- The system has not been developed in accordance with modern standards.
We have identified that the system gives a certain percentage of accurate responses. The rest are just plain wrong and could, in theory, cause problems.
These are possible arguments that could be made:
- If it is giving inaccurate responses, then the AI developer can’t have trained it properly.
- The problem seems likely to lie within the training dataset, so this is the fault of the person who provided the data.
- Your customer paid a lot of money to buy a product that doesn’t work. They want their money back and they are entitled to it.
- The system is defective. The customer was entitled to rely on it and can claim their losses from you.
The business has approved us using the system to diagnose patients provided we always make sure a clinician checks the clinical advice before it is communicated. Despite best efforts, the AI got it wrong and a patient was misdiagnosed and injured as a consequence.
These are possible arguments that could be made:
- The system is a medical device. Medical device regulations have been breached.
- The system is defective; you need to stop using it immediately.
- It was negligent to use AI for this purpose, bearing in mind the known issues.
- Using AI in this context goes against generally accepted clinical practice.
- It is impossible for a clinician to assess the AI’s proposed clinical advice without doing the full assessment themselves anyway, which they did not do. That is negligence.
- Although the policy says that the clinician should check the diagnosis, that doesn’t happen in reality.
- Even if it gets some results wrong, it’s still better than a human. A human would have missed that one too.
I didn't realise we weren't supposed to put confidential information into LLMs.
These are possible arguments that could be made:
- You have breached a statutory obligation to keep the information confidential.
- You have breached a contractual or general legal obligation to keep the information confidential.
- Doing that is contrary to company policy; you may be disciplined or even dismissed.
- Your breach may have put valuable technology at risk.
- We might not be able to obtain a patent for an invention contained in that information.
- That use was not permitted and may have infringed IP rights.
- The information contained personal data. That's unlawful processing.
Ah, I just looked at the prohibited uses in the EU AI Act, and I think we might be doing that already.
These are possible arguments that could be made:
- Your system or service is available in the EU or to EU users and your website is directed (at least impliedly) to EU customers. You have breached the EU AI Act.
- Even though the EU AI Act does not apply in the UK, it is a good indicator of practices that are not considered acceptable. You are likely to have breached UK laws or regulatory requirements.
- Your use is unethical and likely to come under scrutiny one way or another. You should stop doing it.
- Your development of the system was negligent, bearing in mind that the EU AI Act was published long ago.
We used an AI system to generate the logo for our new business; now someone else is using it, and we have had a letter from another business claiming it’s theirs!
These are possible arguments that could be made:
- You do not own the logo.
- The logo or your use of it infringes the rights of the owner of the original works that you copied.
- The AI developer did not have the rights to use the training data it used. This is a breach of the promises it made to you, or at least negligence.
- The AI system reproduced some of its training data. The system is defective and the developer was negligent.
- The system is fine, but your use of it (possibly the way you prompted it) caused the problem. It’s essentially a user error and not the AI developer’s fault.
Our new AI solution, which we developed with a strategic partner, has stopped working. It’s bad timing because we have all become reliant on it and the partner has decided to drop the product.
These are possible arguments that could be made:
- The strategic partner should not have designed the system in that way or should have told you about the risk of obsolescence. That is negligence or breach of contract.
- The partner is not permitted to simply drop the product. Walking away from the relationship will have consequences.
- You should not have relied on the system yet.
- You can use another AI expert to fix and maintain the system.
Ready for battle?
If you are a customer or provider of AI or any solution powered by AI, get used to these sorts of arguments. It’s a good idea to get your own arguments ready in advance. No need to be all “doom and gloom” about it, but some of these points are going to form the basis of discussions (possibly very awkward discussions) that you will have with customers, suppliers, collaborators and their advisors. Forewarned is forearmed. Think about your specific risks and make sure you can point towards the defences in your contract.
We love technology and we love contracts. Contact harry@hamiltonlawscientific.com to discuss contractual methods of allocating risk appropriately and protecting you from legal liability.