Can I use AI to write a complaint response?

With AI purporting to solve increasingly complex tasks, it's understandable that many healthcare providers want to explore ways to reduce their administrative load. Responding to a patient complaint can be burdensome, especially with an ever-expanding to-do list, and it is often hard to know where to start.
As such, some healthcare providers are turning to programs like ChatGPT to draft complaint responses for them. Below, we'll consider some risks involved in this approach.
AI and accuracy
The text generated by an AI system might sound plausible and eloquent, and may even flow better than many of us could write, but therein lies its danger. Healthcare organisations should not let confident, articulate wording seduce them into believing the content is correct.
Even if you use AI to generate a broad outline for your organisational response to a complaint, you will need to check the draft very carefully to avoid incorporating errors into the final version.
AI is known to 'hallucinate' - to generate text that purports to be factual yet has no factual basis. For example, there are reports of AI inaccurately citing legal precedents.
AI programmes may also use language or law from their country of origin (often the USA), rather than the UK. Examples include the use of the word 'plaintiff' rather than 'claimant', or saying 'contact our office' rather than 'practice' or 'surgery'.
Confidentiality
It goes without saying that including patient-identifiable information in any prompt is inappropriate. That includes pieces of information that, taken together, could identify a patient. The medical history set out in a complaint response is likely to be unique to that patient.
The response to a complaint is often based on medical records that you cannot upload to an AI system. Even when data is anonymised, data protection legislation requires your organisation to tell patients how their data is processed.
Many AI systems are based overseas and log the prompts and content generated. There are already strong data protection laws governing the use of sensitive personal data for research and its transfer outside the UK/EU.
NICE has also produced guidance for the adoption of AI in healthcare, which includes an 11-point checklist and other resources.
Vague phrases and false apologies
AI-written content can often be deliberately vague so it's broadly applicable to a wide range of situations. This can lead to phrasing that doesn't directly address the issues being complained about.
If the complainant has specified how they have been affected, an apology that reads "We are sorry for any distress this has caused you" might seem insincere and may inflame the situation. Apologies need to be specific and genuine to resonate with complainants - such as, "I was sorry to read about your long hospital stay and the pain you experienced during your recovery."
We would also recommend avoiding so-called 'false apologies'. This refers to apologies that say things like, "I am sorry you feel your care was poor," or, "I am sorry that we did not meet your expectations."
If phrases like these are used, a complainant might think you are suggesting the problem lies with their unrealistic expectations or their perceptions.
Obvious AI content
While humans are not always good at telling whether text was written by a person or by AI, there have been cases where a recipient has asked AI to draft a similar letter and found sections of the letter they received reproduced verbatim.
Think about how you would respond if a patient challenged your organisation about this. Would you feel comfortable admitting you had used AI to draft the response? Would it indicate to the patient that you hadn't taken their complaint seriously?
Omitting information
Key aspects of a complaint response include things like an offer to meet, what organisational and individual reflection and learning has occurred, and the right to refer the matter to the PHSO.
None of these are likely to be included in an AI-generated response unless they are specified in the prompt.
Reflection
Reflection on concerns raised is a necessary part of a complaint response and therefore outsourcing it to AI defeats the purpose.
Even if you ask AI to draft you a reflection on why one of your staff - for example - missed a case of appendicitis, it will provide you with a long list of all the reasons why a doctor may miss the diagnosis. This is no substitute for individualised reflection.
In summary
In the face of increased complaints and immense pressure on the health service, it's only natural to want to find ways to work smarter. AI may act as a prompt to get you started, but it is no substitute for the human touch when responding to complaints in a suitably authentic and reflective manner.
A letter mainly drafted by AI can undermine the authenticity of a genuine apology and reflection, and it's also not without risk to the clinician using it to write the response.
As an MDU Connect policy holder, you and your team have access to expert medico-legal and dento-legal guidance and support.
We encourage you to address issues early to pre-empt problems, so contact us for specific advice or explore our resources.
This page was correct at publication on 13th March 2025. Any guidance is intended as general guidance for members only. If you are a member and need specific advice relating to your own circumstances, please contact one of our advisers.