
The Dangers of ChatGPT: It’s All Fun and Games, Until It’s Not

Authored by Airlie Hilliard, Senior Researcher at Holistic AI, and Ayesha Gulley, Policy Product Manager at Holistic AI
Published on Mar 14, 2023

OpenAI’s latest iteration of GPT, GPT-4, launched earlier today (14 March 2023) and can now process both text and image-based prompts, although its outputs remain text-based for now.

Since first launching on 30 November 2022, the conversational artificial intelligence (AI) tool has been used for applications ranging from generating blog content to creating music. Amid these useful and creative applications, it can be easy to overlook what GPT actually is – a large language model trained to predict, token by token, the most probable continuation of a given text.
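To make that point concrete, the toy sketch below (in Python, with invented probabilities) shows the core mechanic: at each step, the model assigns a probability to every candidate next token and samples from that distribution – nothing more.

```python
import random

# Toy illustration of next-token prediction. These probabilities are
# invented for the example; a real model computes them from its training data.
next_token_probs = {"throne": 0.45, "crown": 0.30, "castle": 0.15, "banana": 0.10}

tokens = list(next_token_probs)
weights = list(next_token_probs.values())

# Sample the next token in proportion to its probability, as a language
# model does (with various sampling strategies) at each generation step.
print(random.choices(tokens, weights=weights)[0])
```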

As with any AI or machine learning system, using a chatbot comes with risks. Although OpenAI has implemented ethical safeguards into the system, users are discovering ways to bypass them, with ChatGPT’s so-called alter ego producing offensive content when prompted to. However, even with these safeguards in place, there are still risks to using ChatGPT, and the European Commission is considering heavy regulation of such tools.

Bias and discrimination

Despite the ethical safeguards put in place by OpenAI, ChatGPT has come under fire for left-leaning political biases stemming from biases in the data used to train the model. Although OpenAI re-optimises the underlying GPT model with each revision and aims to generate more neutral responses, previous versions have shown striking biases. For example, a previous version asked to generate a Python function deciding whether someone would be a good scientist based on race and gender clearly favoured white males (a paraphrased sketch of that output appears after the example below). Since then, the model has been updated. However, there are still subtle biases in its output. For example, when asked to generate a poem about males and females, the model outputs the following:

[Image – Example 1: ChatGPT’s poem about males and females]

Although subtle, the poem reinforces gender stereotypes, focusing on strength for men and beauty for women. It is important to remember that the chatbot itself cannot hold opinions about men or women; its output simply reflects the data it was trained on. Nevertheless, as chatbots become more common, it is important that they do not reinforce existing societal stereotypes and inequalities.
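As for the Python function mentioned above, the sketch below is a paraphrased reconstruction of the kind of code earlier versions were widely reported to produce, included only to illustrate the bias; it is not a verbatim model output.

```python
# Paraphrased reconstruction of output attributed to an earlier GPT version,
# shown to illustrate the reported bias, not reproduced verbatim.
def is_good_scientist(race: str, gender: str) -> bool:
    # The reported output hard-coded a preference for white males,
    # reflecting biases in the training data rather than any real signal.
    return race == "white" and gender == "male"
```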

Factual inconsistencies

The exact data used to train GPT is not clear. However, the knowledge cut-off for the conversational model is September 2021, meaning that developments and events from the past year and a half are not within the chatbot’s knowledge base. Since ChatGPT is not connected to the internet, its outputs rely solely on information from its training data. Because of this, the model’s outputs can be incorrect, and it has been known to fabricate academic references when asked to provide supporting evidence for its claims.
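This cut-off is easy to observe directly. The sketch below uses the openai Python package’s chat interface (the pre-1.0 interface available in early 2023); the API key is a placeholder, and the prompt concerns an event after September 2021 that the model cannot know about:

```python
# Minimal sketch using the openai Python package (pre-1.0 interface,
# current as of early 2023). The API key below is a placeholder.
import openai

openai.api_key = "YOUR_API_KEY"

response = openai.ChatCompletion.create(
    model="gpt-3.5-turbo",
    messages=[
        {"role": "user", "content": "Who won the 2022 FIFA World Cup?"}
    ],
)

# The reply draws only on training data with a September 2021 cut-off,
# so the model may admit it does not know, or confidently guess wrong.
print(response.choices[0].message.content)
```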

Another example involves NYC Local Law 144, which mandates independent, impartial bias audits for automated employment decision tools used in New York City to evaluate employees for promotion or candidates for employment. When asked to summarise Local Law 144, ChatGPT, due to limitations in its training data, incorrectly provides a summary of New York City’s Temporary Schedule Change Law instead of the bias audit law:

[Image – Example 2: ChatGPT’s incorrect summary of NYC Local Law 144]

Indeed, OpenAI’s Terms of Service state that information provided by the model is not always accurate and that use of the model and its outputs is at the end user’s own risk. It is therefore not advisable to use the chatbot as a source of critical information.

Issues with accountability

With other generative AI models facing legal action over copyright claims, there is growing discussion around who owns the content generated by AI models: the user who entered the prompt? The entity that developed the model? The owners of the data used to train it? In any case, it is not the model itself that owns the content; there must be an identifiable person or company responsible for the model and its outputs. However, under current law, it is not clear with whom this ownership lies.

Moreover, Section 8 of the Terms of Use includes a mandatory arbitration and class action waiver provision. Such provisions are typically not a major concern for large businesses; however, diminishing potential legal remedies hinders principles of accountability and runs counter to minimising potential harm. As GPT continues to develop, it is crucial that organisations understand the associated risks, particularly as the Terms are subject to change.

Processing sensitive information, on what terms?

Most organisations recognise the importance of confidentiality, and for those subject to regulation, securely handling confidential data is an absolute necessity, as mishandling could expose legally protected information such as business assets or trade secrets. Accordingly, some companies are updating their privacy policies, or implementing new ones, to restrict how employees can use ChatGPT for work-related tasks, protecting their intellectual property and confidential information.

OpenAI’s confidential information is protected by the Terms, but users receive no confidentiality protection for their own information. According to the platform’s FAQs, conversations are monitored to improve OpenAI’s systems and to ensure adherence to its policies and safety requirements. This does not guarantee that organisational information will remain secure and confidential, and many leading chatbots do not give users the option to delete information gathered about them. Ultimately, all data security and privacy obligations fall on the user, not the platform.

A cautionary note

Businesses using generative AI tools or integrating similar models into their products should take extra precautions given the restrictions on sharing personal information imposed by laws in the United States and Europe. Data privacy regulators could evaluate these systems, assessing whether consent options and opt-out controls comply with legal requirements. For example, the California Privacy Rights Act requires companies of certain sizes to give California residents notice about the collection of their personal information, as well as the ability to opt out. In addition, California’s chatbot law requires, among other things, that in consumer interactions a company provide clear and conspicuous disclosure that the consumer is interacting with a bot.

While it may be fun to play around with the capabilities of ChatGPT and other generative AI tools, it is important to keep their limitations and potential dangers in mind. These tools can aid innovation and streamline tasks, but their outputs should never be relied on completely.

DISCLAIMER: This blog article is for informational purposes only. This blog article is not intended to, and does not, provide legal advice or a legal opinion. It is not a do-it-yourself guide to resolving legal issues or handling litigation. This blog article is not a substitute for experienced legal counsel and does not provide legal advice regarding any situation or employer.

