Generative AI tools such as ChatGPT and Copilot are already having a significant impact on the student experience across the university sector.  

These tools pose questions around how Imperial assesses students and ensures academic integrity, but they also present opportunities. For example, as a student you can enhance the way you study, while staff can spend more time on teaching than on administrative tasks. Be reassured that we are committed to shaping the Imperial learning experience so that you feel prepared for an AI-augmented workplace and feel confident that we have assessed your skills and knowledge in an authentic yet robust way.

This introduction is intended as a short primer on generative AI, exploring some of the main points and areas relevant to student life. Whilst you do not need a detailed technical understanding of this technology to make use of it, a basic grasp of how it works will help you appreciate its strengths, weaknesses, and the issues to consider when using it during your studies.

This is a fast-changing topic. We aim to update this webpage regularly to take into account significant developments. Let's start by looking at various AI text generators, also known as Large Language Models (LLMs), and conclude with details regarding the responsible use of generative AI.

ChatGPT

ChatGPT has grabbed most of the headlines since its launch in November 2022. It was created by a company called OpenAI, which started as a not-for-profit research organisation (hence the name) but is now a fully commercial company with heavy investment from Microsoft. It is available as a free version, plus a premium version, ChatGPT Plus, at £20 a month, which provides faster, more reliable access, as well as access to the latest language models and features, including plugins, which change its behaviour significantly.

ChatGPT is based on a machine learning approach called ‘Transformers’, first proposed in 2017, and is pre-trained on large chunks of the internet, which gives it the ability to generate text in response to user prompts, hence the name ‘Generative Pre-trained Transformer’. Whilst OpenAI provided some information on the approach used to train ChatGPT, they have not so far released comparable information about GPT-4, the latest model, released in early 2023.

In its standard mode, without plugins, ChatGPT works by predicting the next word given a sequence of words. This is important to understand: it does not understand your question in any meaningful sense, nor does it search for an answer, and it has no concept of whether the text it is producing is correct. This makes it prone to producing plausible untruths or, as they are often known, ‘hallucinations’.
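
To make this concrete, here is a deliberately simplified sketch in Python. It uses a toy bigram model (each word predicted only from the one before it) rather than a real transformer, which conditions on the whole preceding sequence and is trained on vastly more text, but the generation loop is conceptually similar:

```python
# A toy illustration of next-word prediction. Real LLMs use transformer
# networks trained on huge corpora and condition on the full sequence so far;
# this bigram model conditions on just the previous word, but the loop is the
# same: repeatedly pick a statistically plausible next word.
import random
from collections import defaultdict

corpus = (
    "the model predicts the next word given the words so far "
    "the model has no concept of truth the model simply predicts text"
).split()

# Record which words follow which in the training text.
following = defaultdict(list)
for current, nxt in zip(corpus, corpus[1:]):
    following[current].append(nxt)

def generate(start: str, length: int = 10) -> str:
    """Generate text by repeatedly sampling a plausible next word."""
    words = [start]
    for _ in range(length):
        candidates = following.get(words[-1])
        if not candidates:
            break
        words.append(random.choice(candidates))
    return " ".join(words)

print(generate("the"))
```

Nothing in this loop checks facts; it only follows the statistics of the training text, which is why fluent-looking output can still be wrong.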

As it stands today, the free version of ChatGPT does not have access to the internet, so cannot answer questions beyond its training data cut-off date of September 2021. Users paying for the ChatGPT Plus service have access to a version that can access the internet.

ChatGPT Plus customers also have access to plugins which extend ChatGPT’s functionality. For example, a Wolfram plugin allows users to ask questions which are answered by Wolfram Alpha, which excels at mathematical and scientific information. Initial testing suggests this might resolve the issue of ‘hallucination’ in these domains. Many other plugins are available, and more are being developed.

OpenAI makes its service available to other developers, so a wide range of applications make use of it, including writing tools such as Jasper and Writesonic, as well as chatbots in popular applications such as Snapchat.
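
For the curious, a minimal sketch of what this looks like for a developer is shown below. It assumes the official `openai` Python package (version 1 or later) and an API key stored in the OPENAI_API_KEY environment variable; model names change over time, so the one here is illustrative:

```python
# A minimal sketch of calling OpenAI's API from Python. Assumes the official
# `openai` package (v1+) is installed and OPENAI_API_KEY is set; the model
# name below is illustrative and may need updating.
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

response = client.chat.completions.create(
    model="gpt-4",
    messages=[
        {"role": "system", "content": "You are a helpful study assistant."},
        {"role": "user", "content": "Summarise how transformers generate text."},
    ],
)
print(response.choices[0].message.content)
```

This is how tools such as writing assistants embed ChatGPT-style behaviour without training models of their own.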

Microsoft’s Copilot, Google’s Gemini, Meta’s Llama and Anthropic’s Claude

Although ChatGPT has received most of the attention, there are other developers in this space, and their number is likely to increase. The developers of major AI services, such as OpenAI, make their services available to other developers. One of the most significant announcements has been from Microsoft, who are incorporating generative AI across Microsoft 365 tools under the name ‘Copilot’. All staff and students already have access to the platform. You can access additional guidance by visiting ICT's walkthrough webpage.

Google has made similar announcements about their office tools, and a number of their Google Workspace generative AI tools are available to try. Gemini is Google’s ChatGPT equivalent and is available for testing. Like Copilot, it can access the internet, but unlike Copilot, it does not provide references for the sites it has used to give its answers.

Claude, produced by Anthropic, is similar to ChatGPT and is likely to be built into many applications going forward.

Meta’s Llama is slightly different in that it has been made available as an open-source model, meaning that you can run it yourself. Open-source AI models often differ from open-source software, though: the release includes the model weights but not the training data or full training process, so it is not possible to fully understand how the Llama model works, or rebuild it from scratch, from this release.
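
If you want to see what ‘running it yourself’ involves, the sketch below uses the widely used Hugging Face `transformers` library. The model identifier is an assumption for illustration: Llama weights are gated, so you must accept Meta's licence on the Hugging Face Hub before downloading, and a smaller open model can be substituted.

```python
# A minimal sketch of running an open model locally with Hugging Face's
# `transformers` library. The model ID is illustrative: Llama weights are
# gated and require accepting Meta's licence before download.
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "meta-llama/Llama-2-7b-chat-hf"  # assumed ID; any causal LM works
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id)

inputs = tokenizer("Explain photosynthesis in one sentence.", return_tensors="pt")
outputs = model.generate(**inputs, max_new_tokens=50)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```

Note that running a model of this size locally needs a reasonably powerful machine; hosted services exist precisely because most users do not have one.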

A summary of key capabilities, limitations, and concerns around Large Language Models

In considering generative AI, it is important to understand not only its capabilities but also its limitations and the concerns it raises. Some of the key themes are summarised here:

Capabilities

  • It can write plausible-sounding text on any topic.
  • It can generate answers to a range of questions, including coding, maths-type problems and multiple choice.
  • It is becoming more accurate and sophisticated with each release.
  • It generates unique text each time you use it.
  • It performs well at related tasks such as text summarisation.

Limitations

  • It can generate plausible but incorrect information.
  • ChatGPT is only trained on information up to September 2021 (though the paid ChatGPT Plus service provides a version that can access the internet and has a slightly later training cut-off).
  • It has a limited ability to explain the sources of information for its responses (this varies between chatbots).

Concerns

  • It can and does produce biased output (culturally, politically, etc.).
  • It can generate unacceptable output.
  • It has a high environmental impact, and there are concerns around its human impact and the ownership of training material.
  • There are security and privacy concerns around the way users’ data is used to train the models.
  • There is a risk of digital inequity.

Image Generation

It is not all about text – image generation tools have made huge progress too, particularly with Midjourney, DALL-E and Stable Diffusion.

These work in a similar way to text generators: the user gives a prompt and one or more candidate images are produced. Image generation capabilities are being incorporated into general AI services, so Copilot, for example, can also generate images, using OpenAI’s DALL-E.
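
As with text, developers can access image generation programmatically. The sketch below is an assumed example using OpenAI's `openai` Python package (v1+) and its image endpoint; the model name and size are illustrative:

```python
# A minimal sketch of generating an image through OpenAI's API. Assumes the
# official `openai` package (v1+) and an OPENAI_API_KEY environment variable;
# the model name and size below are illustrative.
from openai import OpenAI

client = OpenAI()

result = client.images.generate(
    model="dall-e-3",
    prompt="A watercolour painting of the Queen's Tower at Imperial College London",
    size="1024x1024",
    n=1,
)
print(result.data[0].url)  # temporary URL to the generated image
```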

Approach to misuse of AI and plagiarism detection tools

Academic integrity is at the heart of all we do at Imperial. Submitting work and assessments created by someone or something else, as if they were your own, is plagiarism and a form of cheating; this includes AI-generated content. Please refer to the university’s Academic Misconduct Procedures for further information. To maintain quality assurance, your department may choose to invite a random selection of students to an ‘authenticity interview’ on their submitted assessments. This means attending an oral examination on your submitted work, where you may be asked about the subject or how you approached your assignment, so that its authenticity can be confirmed. Being invited to an authenticity interview does not mean that there is any specific concern that you have submitted work that is not your own.

At this time, we do not intend to deploy any additional AI detection functionality, due to concerns regarding the maturity of these products and their ability to accurately identify cases where students have used AI outside the parameters agreed for their programme.
 
Our current approach, in line with that of many other universities in the UK, is to train our staff to understand AI, identify its various uses, set parameters for those uses within students’ programmes, and be alert to the common features of AI-generated work. In turn, as a student you can expect support from Imperial to help you stay informed about the latest capabilities of AI platforms.

This approach does not preclude Imperial from reviewing this decision in future, should we and the wider university sector gain greater confidence in any technological solutions which may become available to detect the misuse of AI.