
The Rise of Large Language Models: Galactica, ChatGPT, and Bard

Authored by Kleyton da Costa, Machine Learning Researcher at Holistic AI
Published on Feb 17, 2023

The use of large language models (LLMs) such as Galactica, ChatGPT, and Bard has grown significantly over the past few months. These models are becoming increasingly popular and are being integrated into many aspects of daily life, from drafting grocery lists to helping write Python code. As with any novel technology, it is essential for society to understand the limitations, possibilities, biases, and regulatory issues these tools bring.

November 2022 was a significant month for LLMs: two projects that placed AI in the spotlight were introduced, Galactica (Papers with Code and Meta AI) and ChatGPT (OpenAI).

Galactica

Galactica was trained mainly on tens of millions of scientific articles, textbooks, and reference materials, with the aim of supporting scientific research through tasks such as generating literature summaries, organising references, and answering scientific questions.

ChatGPT

The size and composition of ChatGPT's training dataset have not been made available, but it is known that billions of words were used.

ChatGPT can be considered a more generalist model: it can generate text for blogs and scientific documents, write code in various programming languages, translate, and revise text, among other things. The model quickly gained millions of users and became one of the leading trends in AI, largely due to its multiple functionalities.
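Much of this functionality is also accessible programmatically. As a minimal sketch, assuming the OpenAI Python client (openai >= 1.0) and the illustrative gpt-3.5-turbo model, a code-generation request might look like this:

```python
# Hedged sketch: the model name and prompt are illustrative,
# and the client API may change over time.
from openai import OpenAI

client = OpenAI()  # reads the OPENAI_API_KEY environment variable

response = client.chat.completions.create(
    model="gpt-3.5-turbo",
    messages=[{
        "role": "user",
        "content": "Write a Python function that reverses a string.",
    }],
)
print(response.choices[0].message.content)
```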

Bard

In February 2023, Google AI launched an LLM project called Bard. The model aims to use internet data to generate up-to-date, high-quality results for users' queries. Also designed to be versatile, Bard has the potential to enhance the productivity of the most widely used search engine on the internet. However, an error was noticed in the output produced by the model during a demonstration, leading to a decline of roughly 8% in the share price of Alphabet (Google's parent company). For a company the size of Alphabet, this decline amounted to a loss of 100 billion dollars in market capitalisation.

| Model     | Released      | Company                   | Scope            | Base Model      | Number of Users                                |
|-----------|---------------|---------------------------|------------------|-----------------|------------------------------------------------|
| Galactica | November 2022 | Meta AI, Papers with Code | Scientific tasks | Galactica (1)   | Not available                                  |
| ChatGPT   | November 2022 | OpenAI                    | General tasks    | InstructGPT (2) | 100 million                                    |
| Bard      | February 2023 | Google AI                 | General tasks    | LaMDA (3)       | Potentially 1 billion (number of Google users) |

The risks associated with using this type of technology are becoming clearer. The first limitation already observed lies in the biases and errors that the results can present. For example, answers generated by a model should not be assumed to be true, since models can reproduce inaccurate or incorrect information present in their training data. Another limitation is the training period of these models: ChatGPT, for example, is trained on data up to 2021 and cannot return accurate outputs about current news or events that occurred after this training period.

We are witnessing significant technological advancements being adopted across various fields of knowledge and society. It is essential that society comprehends how LLMs work and that governance mechanisms are put in place to address the associated risks around explainability, bias, privacy, and more.

Potential regulatory issues

Who is responsible for the answers generated using LLMs?

The responsibility for the responses generated by LLMs is an important point for potential regulatory action and can directly influence both how these models are used and the quality of their responses. Initially, platforms have exempted themselves from responsibility for the responses generated by their models, placing that responsibility on users. Yet models are trained on vast amounts of data available on the internet, including text drawn from published scientific work or copyrighted material. In this sense, regulation will need to be very clear about who is responsible for the use of responses, since they may, for example, be the result of plagiarism.

How can we measure bias and discrimination in LLMs?

Measuring and mitigating bias and discrimination in LLMs is a complex task, considering that these models are trained on huge datasets that may reproduce biases observed in society. The regulation of LLMs can therefore draw on a data representativeness audit, i.e., evaluating whether the data used to train the model are representative of the diversity observed in society (as in the case of credit risk classification systems, facial recognition systems, etc.). Additionally, models can be evaluated through bias metrics such as equalised odds, which measures whether the false positive and false negative rates are equal across different groups, such as genders or races. Another strategy is human evaluation: the results of the models are exposed to human evaluators, who determine whether they exhibit any type of bias or discrimination (a costly method, but one that can capture discriminatory nuances in the results).
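To illustrate the equalised odds idea, here is a minimal sketch in Python with hypothetical toy data; the function and group labels are assumptions for demonstration, not part of any particular auditing toolkit:

```python
import numpy as np

def equalised_odds_gaps(y_true, y_pred, groups):
    """Return the largest gaps in false positive and false negative
    rates across groups; gaps of 0.0 mean the classifier satisfies
    equalised odds exactly."""
    fprs, fnrs = [], []
    for g in np.unique(groups):
        mask = groups == g
        yt, yp = y_true[mask], y_pred[mask]
        neg, pos = yt == 0, yt == 1
        # FPR: share of true negatives predicted positive
        fprs.append(yp[neg].mean() if neg.any() else 0.0)
        # FNR: share of true positives predicted negative
        fnrs.append((yp[pos] == 0).mean() if pos.any() else 0.0)
    return max(fprs) - min(fprs), max(fnrs) - min(fnrs)

# Hypothetical toy labels for two demographic groups, "A" and "B"
y_true = np.array([0, 1, 0, 1, 0, 1, 0, 1])
y_pred = np.array([0, 1, 1, 1, 0, 0, 0, 1])
groups = np.array(["A", "A", "A", "A", "B", "B", "B", "B"])

fpr_gap, fnr_gap = equalised_odds_gaps(y_true, y_pred, groups)
print(f"FPR gap: {fpr_gap:.2f}, FNR gap: {fnr_gap:.2f}")  # 0.50, 0.50
```

In an LLM setting, y_pred could represent, for example, a downstream classifier's verdicts on model outputs concerning different demographic groups, with large gaps flagging the disparities the audit is looking for.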

