AI vs Human Expertise in Medical Diagnosis

Artificial intelligence is on a par with human experts in medical diagnosis, according to a review published late last month. The finding comes shortly after the UK government invested £250m into the NHS's new artificial intelligence lab under the 'NHSX' initiative, and it speaks to the speculation surrounding AI's arrival in clinical settings: AI promises greater efficiency of resources and time, support for clinicians in their duties, and, as Eric Topol argued in his recent NHS review, more time for doctors to spend with their patients, strengthening the doctor-patient relationship and substantially improving healthcare. Beyond this, AI could be fundamental to personalised healthcare, as algorithms could detect, and account for, the variations between individual patients and how those variations should shape their treatment.


An example of robotics in healthcare, largely fuelled by artificial intelligence. Credit: Seeker


An area of great interest is the application of artificial intelligence to medical imaging, as covered in my recent articles on NVIDIA Clara and various other programs. This is driven by deep learning, the fast-growing subset of machine learning optimised to extract features from data sets and spot recurring patterns that could be indicative of disease, potentially forming algorithmic systems that exceed human ability, as we saw with Moorfields' OCT AI system a few months back. However, experts remain unconvinced, warning that the review in question was limited in quality and may not be as representative as its authors suggested.

Although the result is not black and white, the first systematic review pitting deep learning systems against human skill has now been published in The Lancet Digital Health. The authors, Professor Alastair Denniston and Dr Xiaoxuan Liu, both of University Hospitals Birmingham NHS Foundation Trust, acknowledged AI's abilities but cautioned that the studies may not live up to the exceptional expectations some hold for AI. They analysed papers released since 2012, roughly the early stages of AI's entry into healthcare. Dr Liu stated, "There are a lot of headlines about AI outperforming humans, but our message is that it can at best be equivalent" – a potent reminder that AI is still a young technology and may need far more time and training before it can meet the bold claims made throughout the industry.

The authors' initial search returned some 20,000 clinically relevant studies, but only 0.07% of these (just 14) met the quality bar: adequate data quality, testing on a dataset separate from the training set, and use of the same set of images to assess the human comparators (to keep the comparison fair).
Across these studies, the AI systems correctly detected disease 87% of the time (their sensitivity), against 86% for humans, and correctly cleared a healthy patient 93% of the time (their specificity), against 91% for humans – a small victory for AI, although the result should be treated with caution until more studies are carried out at higher levels of reliability.
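The two headline figures are the standard sensitivity and specificity metrics, computed from the counts of true/false positives and negatives. A minimal sketch of the calculation is below; the counts used are illustrative only (they are not taken from the review), chosen so the outputs match the 87%/93% figures quoted above.

```python
def sensitivity_specificity(tp: int, fn: int, tn: int, fp: int) -> tuple[float, float]:
    """Compute sensitivity and specificity from confusion-matrix counts.

    tp: diseased cases correctly flagged      fn: diseased cases missed
    tn: healthy cases correctly cleared       fp: healthy cases wrongly flagged
    """
    sensitivity = tp / (tp + fn)  # fraction of diseased cases the system catches
    specificity = tn / (tn + fp)  # fraction of healthy cases the system clears
    return sensitivity, specificity

# Illustrative counts: 87 of 100 diseased cases flagged, 93 of 100 healthy cleared.
sens, spec = sensitivity_specificity(tp=87, fn=13, tn=93, fp=7)
print(f"sensitivity={sens:.2f}, specificity={spec:.2f}")  # sensitivity=0.87, specificity=0.93
```

Note that the two metrics trade off against each other: a system that flags everyone scores perfect sensitivity but zero specificity, which is why the review reports both.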

An optometrist in action, although his job could get easier very soon! Credit: Evening Standard


However, an issue remains with these statistics: the healthcare professionals were not given any prior information about the patients, such as the clinical notes and documentation a doctor would normally review when making a diagnosis and predicting a patient's progression. Professor David Spiegelhalter, of the University of Cambridge, also commented on the poor quality of research in the sector (recall that only 0.07% of studies were clinically useful): "This excellent review demonstrates that the massive hype over AI in medicine obscures the lamentable quality of almost all evaluation studies," he said. "Deep learning can be a powerful and impressive technique, but clinicians and commissioners should be asking the crucial question: what does it actually add to clinical practice?" Another Cambridge-based expert, Dr Raj Jena, an oncologist at Addenbrooke's, commented that while these systems could be vital as we cope with a growing strain on the NHS, they still require a great deal of rigorous testing, so that their failure modes can be understood and addressed before clinical implementation can be universally endorsed.


A big thank you to all mentioned bodies and individuals, namely Professor Alastair Denniston, Dr Xiaoxuan Liu, Dr Raj Jena and Professor David Spiegelhalter, for their contributions to my article. I wish you all the very best in your research prospects and congratulate you on your findings and efforts made to date.

Article thumbnail credit: AI Provides Doctors with Diagnostic Advice: How Will AI Change Future Medical Care?, Fujitsu Journal [click here for page]. Note that this article has not contributed to any of the written content of my article, and that you click the above link at your own discretion, as the page has not been checked by our team. Thank you to our readers for your continued support, and to everyone on our fantastic team who keeps our services running smoothly.