Researchers Show How They Can Make AI More Relevant To Clinical Practice

In the study, the new approach improved the sensitivity of detecting diabetic retinopathy and age-related macular degeneration by 11.2 ± 2.0% per image.

In a new publication, researchers from Radboudumc show how they can make an AI system reveal how it is working, as well as let it diagnose more like a doctor, thus making AI systems more relevant to clinical practice.

Doctor vs AI

When it comes to AI, the two major differences compared to a human doctor are:

  • First, AI is often not transparent in how it’s analyzing the images
  • Second, these systems are quite “lazy”

AI looks at what is needed for a particular diagnosis, and then stops.

AI more like the doctor

To make AI systems more attractive for clinical practice, Cristina González Gonzalo, PhD candidate at the A-eye Research and Diagnostic Image Analysis Group of Radboudumc, developed a two-sided innovation for diagnostic AI.

She did this based on eye scans showing abnormalities of the retina – specifically diabetic retinopathy and age-related macular degeneration. These abnormalities can be easily recognized by both a doctor and AI.

But they are also abnormalities that often occur in groups. A classic AI would diagnose one or a few spots and stop the analysis.

In the process developed by González Gonzalo however, the AI goes through the picture over and over again, learning to ignore the places it has already passed, thus discovering new ones. Moreover, the AI also shows which areas of the eye scan it deemed suspicious, therefore making the diagnostic process transparent.

An iterative process

A basic AI could come up with a diagnosis based on one assessment of the eye scan, and thanks to the first contribution by González Gonzalo, it can show how it arrived at that diagnosis.

This visual explanation shows that the system is indeed lazy – stopping the analysis after it has obtained just enough information to make a diagnosis. That’s why she also made the process iterative in an innovative way, forcing the AI to look harder and build more of the ‘complete picture’ that a radiologist would form.

How did the system learn to look at the same eye scan with ‘fresh eyes’? The system ignored the familiar parts by digitally filling in the abnormalities already found using healthy tissue from around the abnormality. The results of all the assessment rounds are then added together and that produces the final diagnosis.
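The published method relies on deep-learning saliency maps and learned inpainting, which the article does not detail. As a rough illustration of the loop structure only – flag the most suspicious area, inpaint it with surrounding "healthy" values, reassess, and combine the rounds – here is a toy sketch in Python/NumPy. All function names, the threshold, and the patch sizes are hypothetical, not from the paper.

```python
import numpy as np

def inpaint_region(img, y, x, size=3):
    """Replace a small patch with the mean of a surrounding ring,
    mimicking 'filling in the abnormality with healthy tissue'."""
    h, w = img.shape
    half = size // 2
    y0, y1 = max(y - half, 0), min(y + half + 1, h)
    x0, x1 = max(x - half, 0), min(x + half + 1, w)
    # Surrounding ring: a larger box around the patch, patch masked out.
    Y0, Y1 = max(y - size, 0), min(y + size + 1, h)
    X0, X1 = max(x - size, 0), min(x + size + 1, w)
    ring = img[Y0:Y1, X0:X1].copy()
    ring[(y0 - Y0):(y1 - Y0), (x0 - X0):(x1 - X0)] = np.nan
    img[y0:y1, x0:x1] = np.nanmean(ring)
    return img

def iterative_assessment(img, threshold=0.5, max_rounds=10):
    """Repeatedly flag the most suspicious pixel, then inpaint it so
    the next round is forced to look at 'fresh' areas. The collected
    findings from all rounds form the final assessment."""
    img = img.astype(float).copy()
    findings = []
    for _ in range(max_rounds):
        y, x = np.unravel_index(np.argmax(img), img.shape)
        if img[y, x] < threshold:
            break  # nothing suspicious left in the scan
        findings.append((y, x, img[y, x]))
        img = inpaint_region(img, y, x)
    return findings
```

On a toy 8×8 "scan" with two bright spots, e.g. `scan = np.zeros((8, 8)); scan[2, 2] = 0.9; scan[5, 6] = 0.8`, the loop flags the brighter spot first, inpaints it, then finds the second one in the next round – whereas a single pass with early stopping would report only the first.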

In the study, this approach improved the sensitivity of detecting diabetic retinopathy and age-related macular degeneration by 11.2 ± 2.0% per image. What this project proves is that it’s possible to have an AI system assess images more like a doctor, and to make transparent how it’s doing so. This might make these systems easier to trust, and thus easier for radiologists to adopt.
