A legal perspective on generative AI and ChatGPT

ChatGPT passes law exams but fails PSLE Math and Science

What an irony. CNN recently reported that ChatGPT passed law exams at the University of Minnesota, and then last week the Straits Times reported that it was terrible at the Singapore PSLE (Primary School Leaving Examination) Math and Science.

For those who are still in the dark, ChatGPT is hailed as the gold standard of all mainstream chatbots today. It is the celebrity of all generative/conversational AI apps, having captivated the world’s attention since making its grand entrance in November last year. (If you must know, it was devised by the firm OpenAI and is built on top of OpenAI’s GPT-3 family of large language models.)

Double-edged sword

I am all for technology, but let's face it, not all shiny things are beneficial, and for me this is where ChatGPT sits. Just because something takes the world by storm does not mean it will chalk up more benefits than drawbacks in the long run.

In fact, a friend of mine asked ChatGPT what it thought about itself and it gave a surprisingly honest answer.

This was the prompt: "What happens when most data and information is generated by ChatGPT?"

“On the positive side, AI-generated content could help streamline and automate many tasks, making them faster and more efficient. It could also help to provide people with quick and easy access to information on a wide range of topics, from technical information to general knowledge.

On the negative side, relying too heavily on AI-generated content could lead to a lack of critical thinking and creativity. People may become overly reliant on AI-generated information and stop questioning or challenging it. Additionally, if AI-generated content is not properly regulated or monitored, it could lead to misinformation or biased information being propagated.”

There you have it. At least it is honest. Now let me share my top three concerns.

#1 Trading away your privacy

Consider the creation of legal documents. That is obviously a serious matter. Personally, I interact regularly with many attorneys who are keenly interested in using AI in the field of law. For example, a lawyer may want to use ChatGPT to compose, review or modify a contract.

Though the attorney might not realize it, any contract text entered as a prompt into ChatGPT could potentially be absorbed by the AI app. It then becomes fodder for pattern matching and the other computational workings of the app. If there is confidential data in the draft, that too is potentially now within the confines of ChatGPT.

So, in the interest of speeding up your work with new technologies, you may have unintentionally given away confidential information. Worse, you would not even be aware that you had done so: no flags were raised, no warning signs appeared.

#2 Regressing on critical thinking and the subtlety of falsehoods

When you search for something on Google, you get a results page with multiple links and sources. Before clicking into a page, you naturally make a judgment based on multiple factors, including its relevance, reliability and authority on the matter. With ChatGPT, by contrast, we ask a question and receive a couple of paragraphs in answer, which may or may not be correct, though it certainly presents itself as truth.

My point is, at the very least, search engines make it possible for users to probe their answers with a few clicks. Where is the facilitation of that on these up-and-coming platforms? Paragraphs generated by generative apps may contain various falsehoods or even blatant untruths. With zero warning signs, how do you even tell?

#3 An accomplice to crime

We all saw this coming. Business Insider wrote an article headlined, "It's not just you: Cybercriminals are also using ChatGPT to make their jobs easier." This is highly concerning: chatbots can counsel criminals and help them get better and faster at what they do.

Already, ChatGPT is speeding up the process of generating targeted phishing emails or malicious code for malware attacks. The question is how do we hold the right parties accountable? If such technology becomes an accomplice to crime, are the AI companies facilitating these crimes liable?

Love it or hate it

In the face of these questions, there is an urgent need globally to set parameters for AI and tech companies, and locally to adapt regulations, policies and procedures. As individuals, more than ever, we need to prioritize critical thinking (don't outsource everything to AI), or we are going to end up in a collective muddle, more vulnerable than ever to manipulation and misinformation.