Lung cancer, the most prevalent malignancy in the world, is the target of nearly half of all radiation therapy (RT) treatments for cancer. RT planning is a labour-intensive, manual procedure that can take days to weeks to complete, and even highly skilled doctors differ in their assessments of how much tissue to target with radiation.
Additionally, as cancer incidence rates rise, a global shortage of radiation-oncology specialists and facilities is expected to worsen. Researchers and collaborators from Brigham and Women's Hospital, working within Mass General Brigham's Artificial Intelligence in Medicine Program, have developed and validated a deep learning algorithm that can rapidly recognise and demarcate, or segment, a non-small cell lung cancer (NSCLC) tumour on a computed tomography (CT) scan. Their study, published in the journal Lancet Digital Health, shows that radiation oncologists in simulated clinics who used the algorithm performed as well as doctors who did not, while completing their tasks 65% more quickly.
According to corresponding author Raymond Mak, MD, of the Brigham's Department of Radiation Oncology, the greatest translation gap in AI applications to medicine is the failure to study how AI can improve human clinicians and vice versa. The researchers are investigating how to create partnerships and collaborations between humans and artificial intelligence that improve patient outcomes. For patients, this approach offers two advantages: better consistency in tumour segmentation and shorter treatment timelines. For clinicians, it reduces tedious but demanding technical work, which can lessen burnout and increase time spent with patients.
To train their model to distinguish tumours from other tissues, the researchers analysed CT data from 787 patients. They then tested the algorithm's performance using scans from more than 1,300 patients drawn from progressively more external datasets. Radiation oncologists and data scientists worked closely together to develop and validate the system. For instance, after noticing that the algorithm was incorrectly including lymph nodes in its segmentations of CT scans, the researchers retrained the model with more of these images, which improved its performance.
Eight radiation oncologists were then asked to perform segmentation tasks and to rate and edit segmentations created either by another expert physician or by the algorithm. Performance did not significantly differ between human-AI partnerships and human-produced segmentations. Unaware of which segmentation they were editing, physicians edited an AI-generated segmentation 65% faster and with 32% less variation than a human-produced one. In this blinded investigation, they also rated segmentations created by AI as higher quality than those created by human experts.
In the future, the researchers intend to combine this work with AI models they developed previously that can recognise organs at risk of receiving unwanted radiation during cancer treatment, such as the heart, and exclude them from radiotherapy. To ensure that AI collaborations benefit clinical practice rather than harm it, the researchers are continuing to examine how doctors interact with AI. They are also creating a separate, unbiased segmentation method that can validate segmentations created by both humans and AI. Co-author Hugo Aerts, PhD, of the Department of Radiation Oncology, said that this study introduces a fresh evaluation technique for AI models, one that emphasises the value of human-AI teamwork. This is especially important because in silico examinations can produce different outcomes from clinical evaluations, and the strategy could help clear the path for clinical deployment.