Development of a Quantum Neural Network-Based Lung Cancer Classification Model Using Synthetic CT Scans and Quantitative Comparison with CNN Architectures
ORAL
Abstract
Quantum Neural Networks (QNNs) leverage quantum properties such as entanglement and superposition to potentially overcome limitations of classical neural networks. This study explores the practical applicability of QNNs by analyzing and optimizing their circuit architecture on synthetic lung cancer CT images.
We examined how design factors such as ansatz structure, data encoding method, and gate type affect classification accuracy. As image resolution increased from 28×28 to 128×128, accuracy improved from 95% to 99%. However, performance dropped at 256×256, suggesting a structural bottleneck potentially caused by limited circuit expressibility or training instability such as barren plateaus.
To validate the effectiveness of our QNN-based hybrid model, we compared its performance against CNN models including ResNet50, EfficientNet, and MobileNet. The hybrid model achieved classification accuracy comparable to these CNNs while remaining lightweight, with only about 17,500 trainable parameters in total, compared with 25.6M, 5.3M, and 3.5M parameters for ResNet50, EfficientNet-B0, and MobileNetV2, respectively. The proposed model also demonstrated fast inference, below 0.01 seconds per sample, making it well suited for real-time medical imaging applications. These results highlight the model's potential as a resource-efficient and practical solution for high-dimensional tasks.
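The design factors named above (data encoding, ansatz structure, gate choice) can be illustrated with a minimal sketch. The following is not the authors' circuit; it is a hypothetical 2-qubit example, simulated classically with NumPy, using angle encoding (one RY rotation per feature), a single trainable RY layer, a CNOT entangler, and a Pauli-Z expectation readout.

```python
import numpy as np

def ry(theta):
    """Single-qubit RY rotation matrix."""
    c, s = np.cos(theta / 2), np.sin(theta / 2)
    return np.array([[c, -s], [s, c]])

def apply_1q(state, gate, q, n):
    """Apply a single-qubit gate to qubit q of an n-qubit statevector
    (qubit 0 is the most significant in the Kronecker ordering)."""
    ops = [np.eye(2)] * n
    ops[q] = gate
    full = ops[0]
    for op in ops[1:]:
        full = np.kron(full, op)
    return full @ state

# CNOT with control = qubit 0, target = qubit 1
CNOT = np.array([[1, 0, 0, 0],
                 [0, 1, 0, 0],
                 [0, 0, 0, 1],
                 [0, 0, 1, 0]], dtype=float)

def qnn_forward(features, weights):
    """Angle-encode two features, apply one trainable RY layer plus a
    CNOT entangler, and return the Pauli-Z expectation on qubit 0."""
    n = 2
    state = np.zeros(2 ** n)
    state[0] = 1.0  # start in |00>
    # Data encoding: rotation angle = feature value
    for q, x in enumerate(features):
        state = apply_1q(state, ry(x), q, n)
    # Variational ansatz: trainable rotations + entanglement
    for q, w in enumerate(weights):
        state = apply_1q(state, ry(w), q, n)
    state = CNOT @ state
    z0 = np.kron(np.diag([1.0, -1.0]), np.eye(2))  # Z on qubit 0
    return float(state @ z0 @ state)
```

In a hybrid model of the kind described, a classical front-end would reduce the image to a few features fed into such a circuit, and the expectation value would feed a classical classification head; the circuit's few rotation angles are what keep the trainable parameter count small.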
Presenters
-
Sungeun Kim
Yonsei University
Authors
-
Sungeun Kim
Yonsei University
-
Yeon Soo Park
Yonsei University
-
Young Woo Kim
Yonsei University
-
Joon Sang Lee
Yonsei University