How we use AI
We use artificial intelligence consciously and responsibly – only where it delivers clear benefit and with appropriate safeguards in place.
AI is a tool, not a substitute for judgement, expertise, or human oversight, and we apply it only when it improves efficiency, clarity, or quality without compromising trust, safety, or institutional values.
Where we use AI, and why
We use AI pragmatically, as a support tool. Common applications within our work include:

Drafting and idea exploration
For example, generating first-draft outlines, summarising long documents, or exploring ways of phrasing a requirement. This saves time and surfaces possibilities, whilst every output is reviewed and refined by a human before it is used.

Assisting software development
AI coding tools do not replace engineering judgement. All code is written, reviewed, and tested by a human developer who is responsible for every aspect. AI helps us work more efficiently, but accountability for every technical decision remains with us.

Supporting accessibility and quality checks
AI can help surface inconsistencies or highlight potential accessibility issues early in our work. It improves consistency and helps flag issues, but human expertise determines the final decisions and fixes.
Where we do not use AI
1. Decisions that affect user outcomes without human validation.
2. Processing or storing confidential client data without explicit consent.
3. Generating final copy that is published without careful expert review.
4. Design choices that require context, sensitivity, or institutional nuance.
If AI output cannot be verified, justified, or explained by a human expert, we do not rely on it.
Safeguards we apply
To ensure responsible use, we commit to the following principles:
- Human oversight
AI is used as a support tool – every output is reviewed by a human who is accountable for the result and for checking it for bias.
- Context sensitivity
We evaluate AI use on a case-by-case basis, taking into account client policy and risk tolerance, the sensitivity of the content, and privacy and data-protection requirements.
- Data privacy and security
We do not feed confidential client data into AI systems without clear consent, and we avoid using AI in ways that create unnecessary data exposure.
- Clear attribution
We only present content as human-authored after it has been reviewed and accepted by a person. Where AI has contributed to an internal draft or exploration, this is acknowledged where relevant.