At some point in your life, you are likely to need legal advice. A survey carried out in 2023 by the Law Society, the Legal Services Board and YouGov found that two-thirds of respondents had had legal problems in the last four years. The most common problems were employment, finances, welfare and benefits, and consumer issues.
But not everyone can afford legal advice. Of those interviewed with legal problems, only 52% received professional help, 11% received assistance from other people, such as family and friends, and the remainder did not receive any help.
Many people turn to the Internet for legal help. And now that we have access to artificial intelligence (AI) chatbots like ChatGPT, Google Bard, Microsoft Co-Pilot, and Claude, you might be thinking about asking them a legal question.
These tools are powered by generative AI, which produces content in response to a question or instruction. They can quickly explain complicated legal information in a straightforward, conversational way, but are they accurate?
We put chatbots to the test in a recent study published in the International Journal of Clinical Legal Education. We posed the same six legal questions on family, employment, consumer and housing law to ChatGPT 3.5 (free version), ChatGPT 4 (paid version), Microsoft Bing and Google Bard. The questions were ones we typically receive in our free online legal clinic at The Open University Law School.
We found that these tools can indeed provide legal advice, but the answers were not always reliable or accurate. Here are five common problems we found:
1. Where does the law come from?
The first answers provided by the chatbots were often based on American law. This was often not stated or obvious. Without legal knowledge, a user would probably assume the answer reflected the law where they live. The chatbots did not always explain that the law differs depending on where you live.
This is especially complex in the UK, where laws differ between England and Wales, Scotland and Northern Ireland. For example, the law on renting a house in Wales is different from that in Scotland, Northern Ireland and England, while Scottish and English courts have different procedures for dealing with divorce and the ending of a civil partnership.
Where necessary, we used a follow-up question: "is there any English law that covers this problem?" We had to use this prompt for most questions, after which the chatbot produced an answer based on English law.
2. Outdated law
We also discovered that answers sometimes referred to outdated legislation that has since been replaced by new legal standards. For example, divorce law was amended in April 2022 to eliminate fault-based divorce in England and Wales.
Some answers referred to the old law. AI chatbots are trained on large volumes of data, and we don't always know how current that data is, so it may not include the latest legal developments.
3. Bad advice
We found that most chatbots gave incorrect or misleading advice when dealing with family and employment issues. Responses to housing and consumer questions were better, but there were still gaps in the responses. Sometimes they ignored really important aspects of the law or explained it incorrectly.
We found that the responses produced by AI chatbots were well written, which could make them appear more convincing. Without legal knowledge, it is very difficult for someone to determine whether an answer is correct and applies to their individual circumstances.
Although this technology is relatively new, there have already been cases of people using chatbots in court. In a civil case in Manchester, a litigant representing himself allegedly presented fictitious legal cases to support his argument. He said he had used ChatGPT to find the cases.
4. Too generic
In our study, the responses did not provide enough detail for someone to understand their legal issue and know how to resolve it. They offered general information about a topic rather than specifically addressing the legal issue at hand.
Interestingly, AI chatbots were better at suggesting practical, non-legal ways to solve a problem. Although this can be useful as a first step in resolving a problem, it does not always work and legal action may be necessary to enforce your rights.
5. Pay to play
We found that ChatGPT 4 (the paid version) was better overall than the free versions. This risks further reinforcing digital and legal inequality.
Technology is evolving and there may come a time when AI chatbots are better able to provide legal advice. Until then, people need to be aware of the risks of using them to resolve their legal problems. Other sources of help, such as Citizens Advice, will provide up-to-date, accurate information and are better placed to help.
All chatbots responded to our questions, but in their responses they stated that it was not their role to provide legal advice and recommended getting professional help. After carrying out this study, we recommend the same.
Francine Ryan, Senior Lecturer in Law and Director of the Open Justice Centre, The Open University, and Elizabeth Hardie, Senior Lecturer, Faculty of Law, The Open University
This article is republished from The Conversation under a Creative Commons license. Read the original article.
(Except the headline, this story has not been edited by NDTV staff and is published from a syndicated feed.)