ChatGPT — Answers that generate more questions

--

The content of this article does not constitute legal advice and should not be relied upon as a source of legal advice.**

Photo by Andy Kelly on Unsplash

Have you heard about the newly trained model that can interact with you in a conversational way, “answer follow-up questions, admit its mistakes, challenge incorrect premises, and reject inappropriate requests”?

OpenAI (the company also behind the AI-art generator DALL·E) launched the research preview of its new chatbot, ChatGPT, in December 2022, and it quickly became extremely popular. So much so that I received several messages like the one below before I could try it out.

I must admit I found the idea of a guided meditation while waiting to access ChatGPT amusing (and well placed). A couple of years ago I would have laughed if someone had told me that I would have an insightful conversation with a chatbot. But here we are; if ChatGPT can admit its mistakes, a human like me should too.

So, what really is ChatGPT?

ELIZA, developed by Joseph Weizenbaum at MIT in the mid-1960s, was one of the earliest chatbots: a very basic simulation of a Rogerian psychotherapist, and an early trial case for the Turing Test (proposed in 1950 by the English mathematician Alan M. Turing to test whether a machine can “think like a human”). There was definitely room for improvement, judging by the responses I received from ELIZA below.

ChatGPT, developed by OpenAI, is a powerful language generation model (a large language model, or LLM) that has been trained on a vast amount of text data. This allows it to generate highly coherent, contextually appropriate, “human-like” text for different contexts and audiences, which makes it useful for a wide range of applications, such as chatbots, automated writing, and language translation.

It can generate news articles, short stories, conversations, essays, poems and more. This potentially makes it a valuable tool for (or instead of?) content creators, marketers, and other professionals who need to generate large amounts of text quickly and efficiently.

ChatGPT has been fine-tuned for a wide range of specific tasks, such as question answering and summarisation. However, it is important to be aware of its limitations and to use it responsibly, ethically and effectively.

Photo by Mahdis Mousavi on Unsplash

What is it able to do?

ChatGPT works using a predictive mechanism: it predicts which words should come next after a user inputs a prompt. OpenAI’s website lists examples of ChatGPT’s abilities, including the following:

· It can write and fix code (including asking clarifying questions), detect software vulnerabilities and parse machine language into readable code. It can also write new code following English-language instructions.

· It will politely remind you that it is never okay to commit any crimes, but it can give you basic tips on how to protect yourself.

· It will understand follow-up questions that refer back to the subject of your previous question.

· It can redraft messages for you, write poems, and will even clarify the limitations of its ability.

That is impressive, as it displays a range of talents that not every human can match (for example, I still cannot code). But what does this really mean for us? John Naughton, in a recent article on The Guardian’s website, states that “it’ll soon be as mundane a tool as Excel”.

Even though ChatGPT is easy and fun to use, it is always interesting to know how a tool finds its answers, in other words, how it actually works. It is a complex system, so this article provides only a brief summary of the methodology and does not analyse the technicalities in depth.

The GPT-3 language technology, the base of ChatGPT, has 175 billion parameters and was trained on roughly 600GB of data on an Azure AI supercomputing infrastructure. GPT-3 stands for (the third version of) Generative Pre-Trained Transformer: it uses an algorithm that was pre-trained on a large amount of data collected from the internet. It also uses a form of machine learning called ‘unsupervised learning’, meaning that there is no human intervention to validate output variables. This has many benefits but also challenges, as it comes with (i) a higher risk of inaccurate results and (ii) a lack of transparency about the basis on which data was clustered. (This raises further questions about the debated “right to explanation”.)
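To make the “predict the next word” idea concrete, here is a minimal sketch in Python. It is purely illustrative (my own toy, not OpenAI’s code): GPT-3 learns patterns with a Transformer network and 175 billion parameters rather than simple word counts, but the core task, predicting a plausible continuation from patterns in training text, is the same.

```python
from collections import Counter, defaultdict

# Toy next-word predictor: count which word follows which in a tiny
# "training corpus", then pick the most likely continuation. This is
# learned from the raw text alone, with no human-labelled answers,
# which is the sense in which such training is "unsupervised".
corpus = "the court ruled that the court may hear the case".split()

next_word_counts = defaultdict(Counter)
for current, following in zip(corpus, corpus[1:]):
    next_word_counts[current][following] += 1

def predict_next(word):
    """Return the continuation seen most often after `word` in the corpus."""
    candidates = next_word_counts.get(word)
    return candidates.most_common(1)[0][0] if candidates else "<unknown>"

print(predict_next("the"))  # 'court', the most frequent word after 'the'
```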

OpenAI additionally used the Reinforcement Learning from Human Feedback (RLHF) methodology, which means human AI trainers were involved in creating conversations. These were then mixed with GPT-3 model-written completions, and the trainers ranked the resulting comparison data. The model was then fine-tuned using Proximal Policy Optimization, guided by reward models built from those rankings. Adding RLHF enabled OpenAI to fine-tune and update the GPT-3 technology, which is why they call it GPT-3.5.
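To give a flavour of how the trainers’ rankings become a training signal, here is a minimal sketch of the pairwise ranking loss commonly used for RLHF reward models. The function name and the numbers are my own illustration under that assumption, not OpenAI’s actual code.

```python
import math

# Illustrative RLHF reward-model signal (hypothetical values, not OpenAI's
# code). Human trainers rank two completions for the same prompt; the reward
# model is trained so the preferred completion scores higher than the other.
def pairwise_ranking_loss(reward_chosen, reward_rejected):
    """-log(sigmoid(r_chosen - r_rejected)): low when the model already
    ranks the human-preferred completion above the rejected one."""
    return -math.log(1.0 / (1.0 + math.exp(-(reward_chosen - reward_rejected))))

print(pairwise_ranking_loss(2.0, -1.0))  # ~0.05: agrees with the human ranking
print(pairwise_ranking_loss(-1.0, 2.0))  # ~3.05: disagrees, a strong update
```

Proximal Policy Optimization then nudges the language model toward completions that this learned reward model scores highly.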

Picture from OpenAI’s website.

Can ChatGPT answer legal questions?

Apparently it can, and in fact it already has. A Colombian judge, Juan Manuel Padilla, claimed he used ChatGPT when considering whether an autistic child’s insurance should cover all of the costs of his medical treatment. He did not rely entirely on the AI, but it is nevertheless a controversial admission.

Especially because, although ChatGPT draws on text from across the internet to generate informed-sounding responses, it has been shown to answer the same question differently on separate occasions. It can also fabricate information, producing inventive and compelling falsehoods (such as references to non-existent articles).
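One likely reason for the varying answers is that models like ChatGPT sample the next word from a probability distribution rather than always taking the single most likely word. The sketch below illustrates this with made-up scores; the words, numbers and temperature value are hypothetical, not real model internals.

```python
import math
import random

# Toy demonstration of sampling: higher "temperature" flattens the
# distribution, making less likely words more probable to be drawn.
token_scores = {"yes": 2.0, "no": 1.5, "maybe": 1.0}  # hypothetical logits

def sample_token(scores, temperature=0.7):
    """Softmax over the scores, then a weighted random draw."""
    weights = [math.exp(s / temperature) for s in scores.values()]
    return random.choices(list(scores), weights=weights)[0]

# Two runs of the same 'question' can easily disagree.
print([sample_token(token_scores) for _ in range(5)])
```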

ChatGPT can understand and respond to legal questions. However, it is important to note that while it may be able to provide information on legal topics, it should not be considered a substitute for legal advice from a qualified professional.

The text data ChatGPT was trained on includes legal texts such as laws, statutes, and case law, so it can quote legal concepts and terminology. However, the accuracy of its responses depends on the quality and relevance of the training data it has been exposed to. Importantly, as laws and regulations vary from country to country and change over time, the answers ChatGPT provides may not always be accurate or up to date. Also, legal cases are complex, and the outcome of a case can depend on several factors, which is not something an LLM can predict. It is therefore recommended to verify any information it provides before taking any legal action.

To be honest, it is unlikely that ChatGPT or any other LLM will replace legal professionals in the near future. Why not? Because legal professionals, such as lawyers and judges, have years of education and experience in the legal field. They are trained to interpret and apply laws, statutes, and case law (in common law countries) to specific situations. They can also analyse complex legal issues, identify key facts and arguments, and make sound judgements.

Moreover, legal proceedings are not only about providing the right information but also about using it in the right way. Legal professionals are trained to use legal strategies and tactics to achieve the best outcome for their clients, and they can represent their clients in court, negotiate with opposing parties, and advocate for their clients’ interests. Additionally, the law can be interpreted differently, and the outcome of a case can depend on the jurisdiction, the law at the time, and many other factors that an LLM cannot fully capture. ChatGPT and other LLMs can be used as tools to assist legal professionals in their work, but they cannot substitute for them.

Additionally, the UK Solicitors Regulation Authority (in the SRA code of conduct) describes the standards of professionalism expected from legal professionals, including confidentiality and the competence to provide specific legal advice. ChatGPT and similar technologies may be cheaper (or free) options for getting legal advice, but they will need to earn users’ trust first.

Photo by Hunter Harritt on Unsplash

What are some of the concerns about using ChatGPT and other LLMs?

Despite its many benefits, ChatGPT is not without limitations. It is susceptible to errors and biases present in its training data. Additionally, as a machine-based algorithm, it may not always produce text that is as nuanced or emotionally resonant as that produced by a human.

1. Bias: The large amounts of text data LLMs are trained on can include biases, which can result in the model generating text that is biased or discriminatory.

2. Transparency: OpenAI has not revealed the full workings of its algorithm, which makes the model a ‘black box’, or closed system. It is exceedingly difficult to challenge a decision produced by a closed system without fully understanding how it works.

3. Misinformation: As ChatGPT is not able to verify the accuracy of the information it provides, there is a risk that it may generate text that contains misinformation or is factually incorrect.

4. Privacy and Cybersecurity: LLMs require large amounts of data to train and fine-tune, which raises concerns about the privacy of the individuals whose data is used. An organisation that wants to guarantee privacy needs to train its own model, which makes this an extremely expensive option, because training and running such a model requires a huge amount of computing power. This puts it practically out of reach for small organisations.

5. Ethical concerns: The human-like quality of the generated text raises ethical concerns about the potential for misuse, such as the generation of fake news, deepfake text, plagiarism, or impersonation. The New York City Department of Education found ChatGPT so impressive that, fearing students might use it to write term papers, it blocked access to ChatGPT from school computers. Alternatively, schools can use tools similar to GPTZero, developed by Edward Tian, a senior at Princeton University, which is supposedly able to distinguish between human-written and machine-written texts (a sketch of the underlying idea follows this list).

6. Dependence: Using ChatGPT is so quick and efficient that there is a risk that people may become too dependent on it and lose their ability to write, think critically, and make decisions.

7. Job displacement: As LLMs can automate certain tasks, such as writing and content creation, there is a concern that they may displace jobs that are currently done by humans.
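GPTZero’s exact method is not public, but detectors of this kind commonly measure how “surprising” a text is to a language model, a quantity called perplexity; machine-generated text tends to be less surprising than human writing. The sketch below uses hypothetical per-word probabilities for illustration, not output from any real model.

```python
import math

# Toy perplexity calculation: a real detector would obtain each word's
# probability from a language model; these numbers are invented.
def perplexity(word_probabilities):
    """exp of the average negative log-probability of each word."""
    avg_nll = -sum(math.log(p) for p in word_probabilities) / len(word_probabilities)
    return math.exp(avg_nll)

machine_like = [0.4, 0.5, 0.35, 0.45]  # predictable wording: low perplexity
human_like = [0.05, 0.3, 0.02, 0.2]    # quirkier wording: higher perplexity
print(perplexity(machine_like))  # ~2.4
print(perplexity(human_like))    # ~11.4
```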

Can ChatGPT really pass the Turing Test? In other words, can it be convincing enough for us to mistake its text for one written by a human being? This ‘mistake’ is also called the ‘ELIZA effect’, which describes early ELIZA users’ misconception that they were actually talking to a human, not a machine. The same thing happened to a Google engineer, Blake Lemoine, who became convinced that LaMDA (Google’s LLM) was sentient.

Who else is using the GPT-3 technology?

· Google has also been working on an experimental conversational AI service powered by LaMDA, which it calls ‘Bard’.

· Microsoft subsidiary GitHub and OpenAI introduced Copilot, a tool that uses a GPT-3-based model called Codex to help software developers write code more efficiently. Microsoft has also launched its new AI-powered Bing search engine and Edge browser, with questionable success.

· Allen & Overy, a leading law firm, integrated ‘Harvey’, an AI platform, into its global practice to allow its employees to generate and access legal content in multiple languages.

· Mishcon de Reya, a London law firm, is hiring a ‘GPT Legal Prompt Engineer’ to help the firm “understand the legal practice use cases to which ChatGPT, GPT3 and other large language models could be applied”.

GPT-3 will power the next generation of apps; however, Sam Altman, the CEO of OpenAI, believes that “AI is going to change the world, but GPT-3 is an early glimpse.” If you have tried ChatGPT, what did you think? It is clear that businesses are interested in leveraging this technology, but the results are yet to be perfected. For now, I will not be changing my profession, as it is unlikely that any GPT technology will be able to replace humans soon. But let’s discuss it again in five years.

**qLegal provides pro bono legal advice and legal education to start-ups and entrepreneurs on intellectual property, data protection, corporate and commercial law. See the qLegal website for more details and to book your appointment. Follow us on Twitter and LinkedIn for regular updates on issues relevant to your business. You can also listen to our podcast, subscribe to our newsletter, or access our free legal resources.

***************************************************************************

Bibliography

1. Kris McGuffie, Alex Newhouse. The Radicalization Risks of GPT-3 and Advanced Neural Language Models. https://arxiv.org/abs/2009.06807

2. M. Onat Topal, Anil Bas, Imke van Heerden. Exploring Transformers in Natural Language Generation: GPT, BERT, and XLNet. https://arxiv.org/abs/2102.08036

3. Adam Sobieszek, Tadeusz Price. Playing Games with Ais: The Limits of GPT-3 and Similar Large Language Models. https://link.springer.com/article/10.1007/s11023-022-09602-0

4. Margherita Gambini, Tiziano Fagni, Fabrizio Falchi, Maurizio Tesconi. On pushing DeepFake Tweet Detection capabilities to the limits. https://dl.acm.org/doi/abs/10.1145/3501247.3531560

5. Murray Shanahan. Talking About Large Language Models. https://arxiv.org/abs/2212.03551

6. Dieuwertje Luitse and Wiebke Denkena. The great Transformer: Examining the role of large language models in the political economy of AI. https://journals.sagepub.com/doi/pdf/10.1177/20539517211047734

7. Sally Weale. Lecturers urged to review assessments in UK amid concerns over new AI tool. https://www.theguardian.com/technology/2023/jan/13/end-of-the-essay-uk-lecturers-assessments-chatgpt-concerns-ai?CMP=Share_AndroidApp_Other

8. Caitlin Cassidy. College student claims app can detect essays written by chatbot ChatGPT. https://www.theguardian.com/technology/2023/jan/12/college-student-claims-app-can-detect-essays-written-by-chatbot-chatgpt

9. Arianna Johnson. Here’s What To Know About OpenAI’s ChatGPT — What It’s Disrupting And How To Use It. https://www.forbes.com/sites/ariannajohnson/2022/12/07/heres-what-to-know-about-openais-chatgpt-what-its-disrupting-and-how-to-use-it/?sh=3622f9872643

10. Gary Marcus. The Dark Risk of Large Language Models. https://www.wired.co.uk/article/artificial-intelligence-language

11. Tiffany Wertheimer. Blake Lemoine: Google fires engineer who said AI tech has feelings. https://www.bbc.co.uk/news/technology-62275326?s=09

12. ELIZA (online implementation): https://web.njit.edu/~ronkowit/eliza.html

13. OpenAI blog: https://openai.com/blog/gpt-3-apps/

--

qLegal — Law clinic for entrepreneurs

We provide free legal advice and resources to tech start-ups & entrepreneurs in the UK, at Queen Mary University of London. @qLegal_ on Twitter and Instagram!