We are committed to meeting the needs of our users, many of whom increasingly prefer to use conversational search tools. We also have an obligation to use the technology responsibly.
How it works
Our conversational search tool interprets the meaning and intent behind a user’s question, searches for relevant, university-approved content, and then summarises the information found to produce a direct answer to the user’s question.
Large language models (LLMs) are used to determine the intent of the question, to generate an answer, and finally to check that the answer is faithful to the search results, using a Guardian Agent pattern. This helps prevent 'AI hallucination'.
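The pipeline described above can be sketched in outline. Everything below is a hypothetical stand-in: the function names, the keyword-based intent matching, and the tiny in-memory content store are illustrative substitutes for the LLM and search components actually in use.

```python
# Illustrative sketch of a conversational search pipeline:
# intent detection -> retrieval over approved content -> answer
# generation -> guardian-agent faithfulness check.
# All names and logic here are hypothetical stand-ins.

APPROVED_CONTENT = {
    "fees": "Tuition fees for 2024 entry are published on the fees page.",
    "accommodation": "Halls of residence are available to first-year students.",
}

def determine_intent(question: str) -> str:
    """Stand-in for the LLM intent step: map a question to a known topic."""
    for topic in APPROVED_CONTENT:
        if topic in question.lower():
            return topic
    return "out_of_scope"

def search_approved_content(intent: str) -> list[str]:
    """Retrieve only approved, audited content for the detected intent."""
    doc = APPROVED_CONTENT.get(intent)
    return [doc] if doc else []

def generate_answer(question: str, sources: list[str]) -> str:
    """Stand-in for the answer-generation LLM: summarise the sources."""
    return " ".join(sources)

def guardian_agent_check(answer: str, sources: list[str]) -> bool:
    """Stand-in for the Guardian Agent faithfulness step, approximated
    here by requiring the answer to appear verbatim in the sources."""
    return bool(answer) and answer in " ".join(sources)

def conversational_search(question: str) -> str:
    """Run the full pipeline; refuse out-of-scope or unfaithful answers."""
    intent = determine_intent(question)
    sources = search_approved_content(intent)
    if not sources:
        return "Sorry, I am unable to answer that question."
    answer = generate_answer(question, sources)
    if not guardian_agent_check(answer, sources):
        return "Sorry, I am unable to answer that question."
    return answer
```

The key design point the sketch illustrates is that the answer is generated only from retrieved, approved content, and a separate check rejects any answer that cannot be grounded in those sources.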
Every conversational search result actively recommends that users check the sources used to create the answer. These sources are clearly indicated within the body of the response.
Training data
Only content approved by the university is used to train our conversational search. This is limited to information that is already publicly available on the university website and has been chosen and audited by the staff responsible for ensuring prospective students receive accurate and useful information about studying at Greenwich.
Search terms and website content are never used to train base LLMs, and never leave the university's cloud infrastructure environment.
Personal data and retention
Conversational search data is encrypted. In line with our policy on retaining search data, questions submitted to conversational search are retained for 12 months before being deleted. Access to this information is managed by the university and restricted to staff and technical partners with a clear and identifiable need (e.g. improving search results).
The cloud platform we use for our search and website has passed a Privacy Impact Assessment (PIA) to ensure it complies with both our policies and the legal requirements around the retention of personal data.
Scope and testing
We limit conversational search to topics where we feel it can have the most impact, and we train it on specific content relevant to these topics. We always balance this against the risk of providing incomplete or mis-summarised information. If a question falls outside the scope of these topics, our conversational search will reply that it is unable to answer that question.
Before adding a new topic, we undertake user testing to ensure that answers are as accurate as possible and do not misinform.
Further information
You can find wider information about the university's use of AI tools by visiting the AI guidance section of our website.