
Developing novel deep learning technologies for medical image classification

October 26, 2023, by Michelle Revels
Dr. Tianbao Yang. Credit: Texas A&M Engineering

Deep learning technologies can assist in medical image classification, such as helping identify variations of brain diseases or cancers based on CT scans. However, traditional deep learning approaches are difficult to interpret and often require large amounts of annotated data. Handling high-resolution medical images is also challenging given limited compute and memory resources.

Dr. Tianbao Yang, associate professor in the Department of Computer Science and Engineering at Texas A&M University, recently received more than $1 million from the National Science Foundation to develop deep learning technologies for medical image classification by leveraging both the images and the associated free-text reports of patients for self-supervised learning. Self-supervised learning is a machine learning paradigm that enables machines to learn from unlabeled data without human supervision, with the potential to dramatically reduce the costs of human labeling. In addition, his algorithm can make deep models interpretable and improve training by sampling from a patient's multiple CT scans for computation.
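Pairing CT images with their free-text radiology reports naturally suggests a contrastive image-text objective, in which matched image-report pairs are pulled together in a shared embedding space and mismatched pairs are pushed apart. The press release does not specify the team's exact objective, so the sketch below is a generic CLIP-style loss in PyTorch; the function name and temperature value are illustrative assumptions, not the project's implementation.

```python
import torch
import torch.nn.functional as F

def image_report_contrastive_loss(img_emb, txt_emb, temperature=0.07):
    """Symmetric contrastive loss pairing each CT image embedding with
    the embedding of its own free-text report (CLIP-style objective)."""
    img_emb = F.normalize(img_emb, dim=1)          # unit-length image vectors
    txt_emb = F.normalize(txt_emb, dim=1)          # unit-length report vectors
    logits = img_emb @ txt_emb.t() / temperature   # (B, B) similarity matrix
    targets = torch.arange(img_emb.size(0), device=img_emb.device)
    # Matched image-report pairs sit on the diagonal; treat them as the
    # correct "class" in both the image-to-text and text-to-image directions.
    return 0.5 * (F.cross_entropy(logits, targets) +
                  F.cross_entropy(logits.t(), targets))
```

The labels here come for free from the pairing itself, which is what makes the approach self-supervised: no radiologist annotation is needed to form the training signal.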

"Since reading many slices, or images, of a CT scan is time-consuming and costly, we expect our system to be able to quickly flag slices with critical findings, bringing early attention to important data that can save time during diagnostics for radiologists," said Yang. "Our system could also provide a second opinion with radiologists' style interpretations for less experienced residents or trainees."

To accomplish this, Yang turned to maximization of the area under the receiver operating characteristic curve (AUC). AUC is a metric for measuring the performance of a classifier that is widely used in medicine and related fields; deep AUC maximization is a training technique that optimizes this metric directly, which helps with imbalanced data classification.
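To illustrate how AUC maximization differs from ordinary loss minimization, the sketch below scores every positive-negative pair and penalizes pairs whose score gap falls below a margin; driving this pairwise surrogate down drives AUC up. This is a minimal generic surrogate, not Yang's algorithm or LibAUC's API; the margin value and function names are assumptions.

```python
import torch
from sklearn.metrics import roc_auc_score

def pairwise_auc_surrogate(scores, labels, margin=1.0):
    """Squared-hinge surrogate for AUC: penalize positive-negative
    score pairs whose gap falls below the margin."""
    pos = scores[labels == 1]                    # scores of positive examples
    neg = scores[labels == 0]                    # scores of negative examples
    diff = pos.unsqueeze(1) - neg.unsqueeze(0)   # every pos-neg score gap
    return torch.clamp(margin - diff, min=0).pow(2).mean()

# Tiny demo: gradients flow back to the model outputs.
scores = torch.randn(8, requires_grad=True)      # stand-in model outputs
labels = torch.tensor([1, 0, 0, 1, 0, 0, 0, 1])
loss = pairwise_auc_surrogate(scores, labels)
loss.backward()
print(roc_auc_score(labels.numpy(), scores.detach().numpy()))  # the metric itself
```

Because the loss is built from positive-negative pairs rather than per-example errors, a rare positive class still contributes to every term, which is why AUC-based training suits imbalanced data.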

In 2020, Yang developed a large-scale optimization algorithm for training deep neural networks by maximizing AUC directly, which achieved first place in the Stanford CheXpert competition. As part of this effort, his team also developed an open-source library, LibAUC, which has been downloaded more than 41,000 times.

By applying self-supervised deep AUC maximization to the database of images and reports, the algorithm can identify differences and abnormalities with few or no annotations, bypassing this step in the diagnostic process. Additionally, many existing approaches for handling numerous high-resolution CT scans reduce the image resolution to cut computational costs. In contrast, this approach addresses the issue through multi-instance learning, leveraging advanced optimization techniques to sample instances for computation without compromising predictive performance. Lastly, self-supervised learning algorithms based on deep AUC maximization are better suited to imbalanced data, a common scenario in the medical domain.
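A minimal sketch of the multi-instance idea, assuming a per-slice CNN encoder: each CT slice is an "instance," a patient's scan is a "bag," and only a sampled subset of slices is encoded per training step to bound memory. The attention pooling and uniform random sampling below are illustrative stand-ins; the grant's actual contribution is the optimization machinery that makes such sampling work without hurting predictive performance, which this sketch does not attempt.

```python
import torch
import torch.nn as nn

class SliceBagClassifier(nn.Module):
    """Multi-instance model: encode each CT slice, pool slice features
    into one patient-level representation, then classify the 'bag'."""
    def __init__(self, encoder, feat_dim, num_classes):
        super().__init__()
        self.encoder = encoder                 # per-slice CNN (assumed given)
        self.attn = nn.Linear(feat_dim, 1)     # attention-pooling weights
        self.head = nn.Linear(feat_dim, num_classes)

    def forward(self, slices):                 # slices: (num_slices, C, H, W)
        feats = self.encoder(slices)           # (num_slices, feat_dim)
        weights = torch.softmax(self.attn(feats), dim=0)
        bag = (weights * feats).sum(dim=0)     # weighted slice average
        return self.head(bag)

def sample_slices(scan, k):
    """Randomly sample k slices from a full-resolution scan so each
    training step touches only a memory-bounded subset."""
    idx = torch.randperm(scan.shape[0])[:k]
    return scan[idx]
```

The attention weights double as a crude interpretability signal, indicating which slices drove the patient-level prediction.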

In addition to deep AUC maximization, Yang and his team will draw on their theoretical expertise to develop improved optimization algorithms for self-supervised learning. This research differs from existing work in that ad hoc solutions for self-supervised learning demand substantial computing resources that are not widely available. The proposed algorithms, by contrast, require far fewer resources and come with provable convergence guarantees.

"Previously, we have developed efficient and effective algorithms for contrastive self-supervised learning, which do not require large computer resources and are more effective than Google's and OpenAI's solutions," said Yang. "We expect to leverage deep AUC maximization and LibAUC library for self-supervised learning to improve the performance further."

Yang and the team believe their deep learning techniques will effectively and efficiently cut the training costs of artificial intelligence by lowering memory requirements and improving generalization performance for medical image classification. They also hope their research can be applied beyond this field.

"This foundational research will likely impact other applications, such as molecular property prediction for drug discovery," said Yang. "The use-inspired research on interpretable AI and large-scale self-supervised learning for medical image classification will enable the accelerated transition of Deep AUC maximization into many benefits for society."

Yang will collaborate with Dr. Tong Wang, a professor at Yale School of Management, who is an expert on interpretable AI. Together, they will develop a new interpretable self-supervised learning framework. Additionally, Dr. Girish Bathla, a Mayo Clinic radiologist, will help conduct use-inspired research on classifying CT images of the human brain.

Provided by Texas A&M University College of Engineering

Citation: Developing novel deep learning technologies for medical image classification (2023, October 26) retrieved 28 November 2024 from https://sciencex.com/wire-news/459773767/developing-novel-deep-learning-technologies-for-medical-image-cl.html