Large Language Models in Education

AI in Teaching and Learning series


[Image: a network diagram with many information nodes. Image by Stable Diffusion (another generative AI tool).]

What is a large language model?

A large language model (LLM) is a predictive text generator that has been trained on a very large dataset. For a really in-depth description of what that means, see this post by Stephen Wolfram; for a much more accessible read, Prof Toby Walsh has a good article in The Conversation. An LLM functions much like the predictive text on your phone, but instead of considering only the previous word, it takes into account much larger blocks of text (or even whole documents).
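
If you would like to see that 'predictive text at scale' idea in action, here is a minimal sketch. It assumes Python with the Hugging Face transformers library installed, and uses the small, older GPT-2 model purely because it runs locally - neither is part of this post, and GPT-2 is emphatically not ChatGPT. The point is only that the model predicts the next words from the whole prompt, not just from the word before.

```python
# A minimal sketch of 'predictive text at scale'.
# Assumes: pip install transformers torch
from transformers import pipeline

# GPT-2 is a small, older model used here only because it runs locally.
generator = pipeline("text-generation", model="gpt2")

prompt = "Large language models can help university staff to"

# The model continues the text by repeatedly predicting the next token,
# conditioning on the whole prompt rather than just the previous word.
result = generator(prompt, max_new_tokens=20)
print(result[0]["generated_text"])
```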

Alternatively, can we just get ChatGPT (itself an LLM) to describe what it is?

> Hey there. Can you please describe what a large language model is?

Sure, I'd be happy to explain what a large language model is!

A large language model is an artificial intelligence system designed to process and generate human language. It is essentially a machine learning model that has been trained on vast amounts of text data, such as books, articles, and web pages, with the goal of learning to understand the nuances of natural language.

The basic idea behind a large language model is to use a neural network, a type of artificial intelligence architecture inspired by the structure of the human brain, to process and analyze large amounts of text data. The neural network is trained using a process known as unsupervised learning, in which the model is given a large corpus of text data and left to analyze and learn from that data on its own.

The result of this training process is a model that is able to generate human-like text in response to a given prompt or input. Large language models like GPT-3 (Generative Pre-trained Transformer 3) have millions or even billions of parameters, allowing them to capture complex patterns and relationships in language and generate highly realistic text.

Large language models have a wide range of applications, from language translation and chatbots to content generation and even creative writing. They represent a major breakthrough in the field of natural language processing and have the potential to revolutionize the way we interact with technology and each other.


Well, that was easy, wasn't it? (Note that I can't help but be polite to ~~our robot overlords~~ this tool, starting with a greeting and including 'please'. Seriously though, do avoid this - it often leads to less clarity and reliability in the LLM's output, as the model may treat 'please' as a keyword rather than as simple politeness. Remember, it's a tool, not a person!)
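
(As an aside, the reply's mention of 'millions or even billions of parameters' is easy to make concrete if you're comfortable with a little Python. The sketch below simply counts a model's weights; it again assumes the Hugging Face transformers library and the small GPT-2 model, which are conveniences of mine rather than anything this post prescribes.)

```python
# A rough sketch of what 'parameters' means: just count the model's weights.
# Assumes: pip install transformers torch
from transformers import AutoModelForCausalLM

model = AutoModelForCausalLM.from_pretrained("gpt2")  # small, older model
n_params = sum(p.numel() for p in model.parameters())
print(f"GPT-2 (small) has roughly {n_params / 1e6:.0f} million parameters")
# Models such as GPT-3 are orders of magnitude larger again.
```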

So... now what? Is it useful? Can it help me? The answer is, of course, 'it depends'. If you are suffering from writer's block, it can be a great way to generate some starting points. Let's see an example:

> Give me 3 scenarios in which I might use a large language model, without it impacting academic integrity or intellectual property in my workplace.

Sure, here are three scenarios where you could use a large language model without any concerns about academic integrity or intellectual property:

  1. Creative writing: If you are a writer looking to generate ideas or inspiration for your next project, you could use a large language model to help you come up with new ideas or generate text in the style of a particular author or genre.
  2. Personal productivity: If you are looking to automate repetitive tasks or streamline your workflow, you could use a large language model to help you generate text for emails, social media posts, or other communications. For example, you could use a language model to generate responses to common customer inquiries, saving you time and effort.
  3. Language learning: If you are learning a new language, you could use a large language model to help you practice your skills. For example, you could input a sentence in your target language and ask the model to generate a response, giving you an opportunity to practice your comprehension and grammar skills.

It's worth noting that while large language models are incredibly powerful tools, they are not always 100% accurate and can sometimes produce errors or biased results. As with any technology, it's important to use them critically and with a grain of salt, and to be aware of their limitations and potential biases.


Great! Glad the model agrees on using it for idea generation - I loathe staring at a blank page.

What if I want to make it a little more relevant to the University? Let's refine our query and see if we can make it more useful locally:

> Could you frame that for an employee at a university?

Sure, here's a framing that might be more relevant for an employee at a university:

  1. Academic research: If you are conducting research in the field of natural language processing, you could use a large language model to test out new algorithms or models, or to generate synthetic text data for use in experiments.
  2. Curriculum development: If you are designing a course on language or literature, you could use a large language model to help you analyze and generate examples of text in the target language or genre, or to help students practice their language skills.
  3. Student support: If you are working in a student support role, you could use a large language model to help students improve their writing skills or to provide feedback on their assignments. For example, you could use a language model to generate suggestions for improving sentence structure or grammar, or to provide examples of well-written essays in the target language or style.

Again, it's important to note that while large language models can be valuable tools in these scenarios, it's important to use them critically and to be aware of their limitations and potential biases. Additionally, it's important to follow the ethical guidelines set by your institution and to ensure that your use of these tools does not violate any academic integrity or intellectual property policies.


This is often referred to as prompt crafting or prompt engineering - refining your query so that it is more likely to generate a response that is useful to you. We will cover this in more detail on the next page.
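
To make that refinement loop concrete, here is a minimal sketch of the same two-step conversation done programmatically. It assumes the OpenAI Python client and an API key, and the model name is purely illustrative - this post itself only uses the ordinary ChatGPT chat window.

```python
# A minimal sketch of iterative prompt refinement via the API (an assumption -
# this post only uses the ChatGPT web interface).
# Assumes: pip install openai, with OPENAI_API_KEY set in the environment.
from openai import OpenAI

client = OpenAI()
MODEL = "gpt-4o-mini"  # illustrative model name only

messages = [{
    "role": "user",
    "content": ("Give me 3 scenarios in which I might use a large language "
                "model, without it impacting academic integrity or "
                "intellectual property in my workplace."),
}]
first = client.chat.completions.create(model=MODEL, messages=messages)
print(first.choices[0].message.content)

# Refine: keep the conversation history and add a more targeted follow-up,
# just as we did above in the chat window.
messages.append({"role": "assistant",
                 "content": first.choices[0].message.content})
messages.append({"role": "user",
                 "content": "Could you frame that for an employee at a university?"})
second = client.chat.completions.create(model=MODEL, messages=messages)
print(second.choices[0].message.content)
```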

Some great further reading on this can be found in this TechDirt post and in another post from Ethan Mollick's blog, and there are even Coursera courses available.

Where to from here?

In the next section, we'll discuss what LLMs can and can't do, and later we'll talk about the specific case of AI (as LLMs) in education.


Written by Jairus Bowne, Academic Specialist in the School of Biomedical Science's Digital Learning Hub, and ChatGPT.