
Legal Considerations For Using ChatGPT

Updated: Nov 23, 2023



For a few months now, it’s been almost impossible to go more than a day without someone mentioning ChatGPT. This new AI-driven language processing tool represents a huge leap towards everyday, mainstream use of artificial intelligence. As with any technological change poised to alter the way we work, opinion on how, and even whether, we should be using this technology is sharply divided.

ChatGPT is just one of many similar language processing tools, with others including Google Bard and Jasper.ai. Currently, most of the conversation has revolved around ChatGPT, perhaps because it is free to use, or perhaps simply because it was the first to be widely experimented with by the general public. So, for the purposes of this article, we will focus on ChatGPT, though much of what we discuss will apply to other current and future AI-driven language processing tools.


What Is ChatGPT And How Could It Help Businesses?


Before we get into the potential legal issues and ethical considerations, let’s talk about why ChatGPT has been such a hit and some of the benefits it might bring to our working lives.

ChatGPT is designed to enable human-like conversations: you can ask the tool questions and give it instructions. Having been trained on vast amounts of data and information, the tool can gather research at exceptionally high speed. Furthermore, it can repackage this knowledge into various formats. For instance, it can write an article or an email, put together a lesson plan, or draft a sales brochure or an essay. Plus a whole lot more - the possibilities are vast.

ChatGPT responds to prompts, so its effectiveness and accuracy rely heavily on the detail and clarity of user requests. It can take quite a bit of back and forth to develop content close to what is desired, and users can specify a particular focus, style, tone of voice and much more. The output could be used for research, ideas and options, as a first draft or springboard for creating original work, or some users may prefer to let ChatGPT produce the finished work entirely.
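To illustrate just how much the detail of a prompt shapes the output, here is a minimal sketch of sending a structured request to the model programmatically. It assumes the official openai Python package, an OPENAI_API_KEY environment variable and an illustrative model name - treat it as an example of prompt detail, not a recommended setup.

```python
# A minimal sketch, assuming the official "openai" Python package and an
# OPENAI_API_KEY environment variable; the model name is illustrative only.
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

# A detailed prompt: subject, audience, tone and format are all spelled out,
# which tends to produce output much closer to what is actually wanted.
prompt = (
    "Write a 150-word introduction to a staff training session on email "
    "etiquette. Audience: new starters at a small UK accountancy firm. "
    "Tone: friendly and practical. Format: three short paragraphs."
)

response = client.chat.completions.create(
    model="gpt-4o-mini",  # assumed model name for illustration
    messages=[{"role": "user", "content": prompt}],
)

print(response.choices[0].message.content)
```

The same request without the audience, tone and format instructions would still produce something, but you would likely spend far more time going back and forth to get it into usable shape.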

Advanced AI tools will inevitably appeal to businesses as they could save time. For example, rather than investing time in writing, editing and repurposing training plans manually, a trainer can use AI to assist, freeing their time up to focus more on the delivery of the training.

Now, I could list all the ways that ChatGPT could help businesses, but I thought I'd see what ChatGPT had to say.

I asked the tool, ‘How could businesses benefit from using ChatGPT?’ and this was its response:


[Screenshot: ChatGPT’s summary of the business benefits of using ChatGPT]

Actually, the first response was more comprehensive but I asked the chatbot to make it more concise.


What Are The Ethical Issues Around ChatGPT?


Undoubtedly, AI is going to change the way we work. Perhaps the way we socialise, learn, study and even date too. So, whilst there are many potential benefits to technology that can make work faster and more efficient, ethics must be considered.


As usual, the technology has come before the discussion, as happened with the rise of social media. Rather than deciding as a society whether we want something, we are given it and then have to debate how it should be used, asking that inevitable question: ‘Where might it take us (and do we want to go)?’


Melvin Kranzberg once said, ‘Technology is neither good nor bad; nor is it neutral.’ This applies very well to ChatGPT and its AI competitors. As much of a mystery as AI may seem, it does not spring up from nowhere. It is trained on vast amounts of information; that training is facilitated by humans, and the information itself is written by humans. Therefore, AI carries bias.


Furthermore, since ChatGPT’s responses are based on information only up to September 2021 (as at the time of publishing - July 2023), there is likely to be a gap, which could mean that current values are not as well represented in its responses.


AI is not always able to discern content that is sexist, ableist, racist and so on, yet it may have access to such material and draw on it when generating responses. Overtly offensive content, however, is likely to be picked up by the user. Bias is harder to detect, and because history and information have largely been written by and for certain groups of people, AI has been accused of serving up whitewashed content or of dismissing contributions from minorities. For this reason, there are concerns that AI tools could hinder the progress we are making towards social justice, equal opportunities and fair representation. AI is not designed to be biased, but it cannot yet recognise bias.


One of the major concerns around AI is how it might disrupt the job market. Whilst technological advances have disrupted jobs many times before, AI is different, and those in creative professions are especially concerned that what is lost will not only be jobs but also the creative outlet such people rely on for wellbeing as much as for income.


Copyright is a legal issue which we will soon move on to, but it is also an ethical issue. ChatGPT generates content at lightning speed, but it does this by filtering through and drawing on the information it has to hand. That information is usually written by people, and there are questions over whether this access to, and repurposing of, other people’s work is ethical, even if it is (or is in future deemed to be) lawful.


It is important to note that there are also positive ethical benefits from language processing tools such as ChatGPT, particularly for neurodivergent people. Those with conditions such as ADHD, autism, dyslexia, etc may find AI capable of making day-to-day living far easier. Using ChatGPT as a springboard to ease into writing content or beginning to plan or even to spark some ideas can help to ease overwhelm. This may be of particular use to neurodivergent people but could help others too. In our increasingly busy modern world, where productivity is paramount and the pressure of workload is ever-increasing, AI tools like ChatGPT have the potential to take away some of the stress and clear parts of our mental load, so that we may funnel our unique skills in more effective ways.


Legal Considerations For Using ChatGPT


You’ll want to be sitting comfortably because we are about to enter a minefield! As with anything new, the law stands poised to be called upon to decide the new rules: we put out the fires, then use what we have learnt to safeguard for the future and build policy around it. There is currently little to no legislation on AI, but legal cases are starting to be filed, and over the next few years policies and guidance will be created, largely influenced by the outcomes of these initial disputes. There are many areas in which AI may breach current law and steer future legislation. For now, we’ll look at the two key areas of law most likely to be affected by AI:


1. Data Protection and ChatGPT


Every user of ChatGPT is also a trainer of ChatGPT. Confused? Allow me to explain...


The chatbot is not a fixed programme; it is an AI tool, so it continues to learn, and it learns from you. This is one of the reasons the tool is so useful: the information and requests you input shape how it responds. It will continue to learn and adjust, and, as it gets smarter, it will be able to respond more accurately to your specific needs. This means it can learn your tone of voice and cite your preferred sources.


However, this can also be a potential legal minefield, because ChatGPT is a two-way conversation: everything you input may be stored and used to train the model, which means it is not necessarily available only to you but could surface in responses to other users of the service.


Users may therefore unknowingly share confidential information that would then be at risk of being distributed, potentially breaching data protection law.


Italy temporarily banned ChatGPT, pending investigation, over concerns about how it collects, stores and uses data. Samsung have also restricted the use of the chatbot internally, having found that staff had uploaded sensitive code to ChatGPT.


Any copyrighted work, code, data or other information you input into ChatGPT can become part of the AI’s knowledge and, more importantly, may be drawn upon in responses to other users. This makes it very easy for individuals to share information with the AI without full consideration of how this may breach data protection.


To take an entirely fictional example, a social worker wanting to use AI to help build a report might put sensitive information about a protected case into ChatGPT. That information may later be pulled up for another user in response to a different question and could end up published in an article, publicly sharing restricted information. Not only could this have significant legal consequences, but it could also put a vulnerable person at risk.
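One practical safeguard, if AI tools are used at all in this kind of work, is to strip out or replace identifying details before anything leaves your own systems. The sketch below is a minimal illustration in Python; the patterns and placeholder labels are assumptions for the sake of example, not a complete anonymisation solution, and no substitute for proper data protection advice.

```python
import re

# A minimal, illustrative redaction pass run *before* text is sent to any
# external AI service. The patterns below are assumptions and nowhere near
# exhaustive - real anonymisation needs far more care (names, addresses,
# case numbers, free-text identifiers, etc.).
REDACTION_PATTERNS = {
    "email": re.compile(r"\b[\w.+-]+@[\w-]+\.[\w.-]+\b"),
    "uk_phone": re.compile(r"\b(?:\+44\s?|0)\d{4}\s?\d{6}\b"),
    "ni_number": re.compile(r"\b[A-Z]{2}\d{6}[A-Z]\b"),
}

def redact(text: str) -> str:
    """Replace obviously identifying details with placeholders."""
    for label, pattern in REDACTION_PATTERNS.items():
        text = pattern.sub(f"[{label.upper()} REDACTED]", text)
    return text

draft = "Please summarise: contact jane.doe@example.org on 07700 900123 re the case."
print(redact(draft))
# -> "Please summarise: contact [EMAIL REDACTED] on [UK_PHONE REDACTED] re the case."
```

Even with a filter like this in place, the safest approach for genuinely sensitive material is simply not to put it into a public AI tool at all.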


2. Copyright and ChatGPT


A major question around AI is how to deal with copyright law. Of course, we’ve been here before, with Napster and other music-sharing services in the early 2000s. As services moved to paid models, solutions were found and compromises made so that artists were not exploited. However, this took some time, and the technology did much to disrupt the music industry; whether that was beneficial or detrimental is still widely disputed. ChatGPT and its competitors may present an even more complex issue for copyright protection.

Copyright law can be difficult to navigate as it is. You can generally quote and include snippets as long as you have permission or cite your sources. When you’re using an AI tool to gather information for you, though, how can you be sure it isn’t directly lifting content from other places? And just how much content can be used from a particular source before it breaches copyright?

These are big questions, and there is an ethical aspect that will likely feed into the legal issues too. In the UK, copyright is granted automatically, meaning if you created it, you own it. People may be inspired by what you wrote, or they may use your research or insights to support their arguments, but they cannot republish your content without permission. Certainly, they cannot pass it off as their own. Artists, writers, academics, researchers and many others rely on copyright law to secure their income. It gives their work monetary value and thus secures it as a profitable profession. However, if AI is crawling websites and plucking content at will, this could raise issues with regard to copyright ownership and cause significant problems for content producers.

Comedian and writer Sarah Silverman has recently filed lawsuits in the US against OpenAI and Meta, accusing them of infringing her copyright. ChatGPT’s ability to summarise her books has led to accusations that the AI was given access to piracy websites where her work was unlawfully published. As yet, the companies behind the tools have not disclosed where the information was sourced from, so we are in uncharted territory as to whether Silverman will have to prove they accessed her work unlawfully or whether OpenAI will have to disclose exactly how the AI accessed the information. Either way, this case looks set to begin shaping how the legal world will oversee this new rise in AI capability.

If you’re using AI to generate content or conduct research, one thing you can do to protect yourself is to ask ChatGPT to cite its sources. Knowing where information was gathered from will be important in shielding yourself from a breach of copyright when using AI tools. However, as ChatGPT itself warns, it is for you to decide what can and cannot be used lawfully. If you need assistance with this, please get in touch for copyright guidance when using AI.
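As a purely illustrative sketch (using the same assumed openai package and illustrative model name as earlier), a citation request can simply be built into the prompt, with the caveat that any sources the model lists still need to be checked by a human:

```python
from openai import OpenAI

client = OpenAI()  # assumes OPENAI_API_KEY is set in the environment

question = "Summarise the key principles of UK copyright ownership."

response = client.chat.completions.create(
    model="gpt-4o-mini",  # assumed model name for illustration
    messages=[{
        "role": "user",
        "content": question
        + " List the sources you are drawing on, and say clearly if you "
          "cannot identify a source for a particular point.",
    }],
)

# The answer and any claimed sources still need human verification -
# AI tools can and do invent plausible-looking citations.
print(response.choices[0].message.content)
```

Which brings us neatly to the next problem: what the tool tells you is not always true.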

False Information And ChatGPT


Fact-checking in the modern world is as essential as it is difficult, and AI is set to make this even more challenging. ChatGPT and similar chatbots are designed to generate human-like responses, so perhaps we shouldn’t be all that surprised that occasionally they completely make things up, or at least embellish. Two lawyers in America learnt this the hard way.


In using ChatGPT to draft a case brief, lawyers at the firm Levidow, Levidow & Oberman unwisely trusted the bot to find legal precedents to support their case. It was only when the court was unable to find any information on the cases cited that it became evident they were fictitious. Examples that did have a basis in truth were also found to contain errors and misleading information. This failure to check the integrity of the information provided by ChatGPT cost the law firm $5,000, but it could have had far more dramatic consequences had these fictional cases not been flagged so early on.


ChatGPT and other chatbots can easily provide false or misleading information. So, although it is a useful prompt and a fast way of gathering research, professionals need a proper system for fact-checking when using AI to aid their work.

Can ChatGPT Be Used To Provide Legal Advice?


ChatGPT can be used to learn more about the law and provide guidance. You might ask, for instance, what should be included in certain documents. Yet, ChatGPT itself claims it should not be used to create or edit legal documents. This is how ChatGPT responded when I asked if it could create legal contracts for me:

[Screenshot: ChatGPT’s response when asked whether it can draft legal contracts]

Fortunately, Aubergine Legal are on hand to help with any of your commercial law queries and contract requirements so please do get in touch.


Finally, take a look at my previous blog with my legal tips for using AI in your business here - this has a handy checklist on practical things to think about when using AI.


