Francesca’s practice is chiefly focussed on the negligence liability of professionals and public authorities. Her work includes the most complex clinical negligence cases, multi-million pound claims in respect of lawyers, architects and surveyors, social services claims and high-profile cases examining the extent of public authorities’ duties.
She is the author of Thomson Reuters’ Practice Note on lawyers’ scope of duty and has a particular interest in the nature and extent of the duty of care.
In 2020, Francesca was appointed to the Attorney-General’s Panel of Counsel, and in 2023, she was appointed a part-time Judge of the First-tier Tribunal, sitting in the Health and Social Care Chamber.
Sir Geoffrey Vos MR is one of our most senior judges. Recently, he has given a series of excellent speeches on professional negligence and AI. He describes a clear tension in how professionals approach the use of AI and the liability risks that this creates – focussing mainly on how the courts will approach a negligence claim where AI has been used. His maxim is “damned if you do, damned if you don’t” – adopt AI technology without care and caveat, and you will come to grief. A hesitancy to embrace it at all, though, is just as dangerous.
This applies to medicine as it does to every profession, including law and accountancy. By now, we all know that AI is a powerful tool in the medical arsenal. Yes, it has the potential to hallucinate, or to get things wrong. But where AI can help diagnose whether a skin lesion is cancerous with 99% accuracy, doctors may be just as liable for using an available AI tool wrongly as for not using it at all.
A failure to use a tool that can be vastly more accurate than even the most experienced and skilled doctor is likely to fall below the standard required by the courts. If DERM (Deep Ensemble for Recognition of Malignancy), developed by Skin Analytics, analyses images to assess and triage skin lesions, potentially redirecting benign cases to non-urgent pathways and flagging suspicious lesions, it can and should be used widely. We are all familiar with the Bolam test: whether a doctor acted in accordance with a responsible body of professional opinion. If their actions were supported by that body of opinion, they are not considered negligent. If a responsible body of opinion comes to regard the use of AI tools as standard practice, that use becomes part of the required standard of care.
This is appreciated by those in management at the NHS. In 2024, the NHS Humber Health Partnership said its Flow initiative includes measures designed to streamline every stage of a patient’s progress from emergency department to discharge. AI software will be used to prepare X-ray reports and read blood test results, while bosses have pledged rapid assessments in emergency departments, more home-based treatments and virtual wards.
Many experts talk about the potential of artificial intelligence (AI) and machine learning to fundamentally improve disease research and overall health outcomes, but few people know that these technologies are already employed in more mundane, administrative parts of the healthcare system. In America, for example, many hospitals employ AI-assisted predictive models to help them perform a range of tasks, including automating billing procedures and appointment scheduling. Anyone who has tried to rearrange an appointment at an NHS hospital recently can only dream of the improvement that AI might bring!
In a world where medical negligence claims often arise from delayed diagnosis, or because a GP’s 10-minute appointment was insufficient to properly understand a patient’s problems, it is worth dwelling on how important and useful AI will be in speeding up these routine administrative elements of healthcare. Proper adoption of, and training on, these types of AI tools is not just time-saving; it could be life-saving. I would be surprised if medical indemnifiers did not start seriously considering whether the use of these tools should be mandatory.
A recent study by the University of Minnesota School of Public Health (SPH) shows how hospitals in the U.S. are using AI-assisted predictive models. Approximately 65% of U.S. hospitals reported using AI-assisted predictive models. These models were most commonly deployed to predict inpatient health trajectories (92%), identify high-risk outpatients (79%), and facilitate scheduling (51%). While 61% of hospitals evaluated their predictive models for accuracy, only 44% conducted similar evaluations for bias.1
Bias matters. AI models are built by analysing huge data sets, which enable them to diagnose accurately or to predict the course of disease or treatment. When those data sets are derived from clinical trials in which sex differences have not been properly analysed, the AI becomes less effective. In particular, it becomes less effective in the treatment of women, who are often unrepresented or under-represented in medical trials.
An example: experts said an algorithm developed using AI had enormous potential to improve patient care after a trial found it was more effective than current testing in ruling out heart attacks. Heart attacks are diagnosed based on the levels of troponin, a type of protein, in the blood. The new approach combines a patient’s troponin test results with other information and was able to rule out a heart attack in more than double the number of patients compared with the current approach, with 99.6 per cent accuracy. In Invisible Women, Caroline Criado-Perez points out that troponin levels differ by sex, and that if an algorithm turns out to have been trained on male-dominated data and its performance has not been analysed by sex, it is of much reduced value.
There are two important lessons from this observation.
The first is that where AI is to be deployed for diagnostic or predictive purposes, doctors need to be aware of its limitations. This means that AI should be used alongside more traditional methods in treatment, unless there is sufficient certainty about the reliability of the source material used to train the AI.
The second important point is the accurate recording of sex data. Although it is very important that patients are free to express their gender identity without fear of discrimination, it is imperative that accurate sex data is recorded by health professionals.2 A triage form that asks for gender rather than sex is of little use for that purpose. A review led by Alice Sullivan, a professor of sociology and research specialist at University College London, found that the use of “gender” as a catch-all term had begun in the 1990s and had become increasingly common, leading to what she termed “a widespread loss of data on sex”.3
My view is that inaccurate reporting and recording of sex data will create additional liability risks for medical professionals, especially as the use of AI becomes more widespread. It is trite to say that women’s pain, and conditions which overwhelmingly affect women, have been under-studied; this remains a critical problem in healthcare. The vaginal mesh scandal is just one example where many millions of pounds have been paid out in compensation.4 All doctors should be concerned by the obfuscation created by inaccurate data recording, and even more so by the BMA’s Resident Doctors Committee’s condemnation of the recent UK Supreme Court ruling that the definition of a woman in law is based on biological sex. That is a backwards step at a time when technology should be moving treatment forward.
It is not just sex which matters. Sara Khalid, Associate Professor of Health Informatics and Biomedical Data Science at NDORMS, explained: ‘Health inequity was highlighted during the COVID-19 pandemic, where individuals from ethnically diverse backgrounds were disproportionately affected, but the issue is long-standing and multi-faceted.
‘Because AI-based healthcare technology depends on the data that is fed into it, a lack of representative data can lead to biased models that ultimately produce incorrect health assessments. Better data from real-world settings, such as the data we have collected, can lead to better technology and ultimately better health for all.’5
For doctors who are keen to avoid additional liability risks, the outlook is clear: AI is going to continue to improve and will become an essential tool. Ignoring it, or failing to use it in obvious cases such as skin lesion diagnosis, may well become a negligent failing. Refusing to embrace AI as a way of managing administrative tasks and the scheduling of appointments will lead to delay, which may indirectly contribute to heightened liability risks. Doctors should be aware of bias and use AI with care and in its proper context; they can themselves contribute towards better outcomes by accurately recording sex and ethnicity data.
References:
[1] https://www.healthaffairs.org/doi/full/10.1377/hlthaff.2024.00842
[2] https://www.bmj.com/content/389/bmj.r797
[5] https://www.ox.ac.uk/news/2024-02-22-removing-bias-healthcare-ai-tools