
Automatic X-ray teeth segmentation with grouped attention

Wenjin Zhong*, Xiao Xiao Ren, Han Wen Zhang

*Corresponding author for this work

Research output: Contribution to journal › Article › peer-review

Abstract

Detection and segmentation of teeth from X-rays aid healthcare professionals in accurately determining the shape and growth trends of teeth. However, small dataset sizes due to patient privacy, high noise, and blurred boundaries between periodontal tissue and teeth pose challenges to models’ transportability and generalizability, making them prone to overfitting. To address these issues, we propose a novel model, the Grouped Attention and Cross-Layer Fusion Network (GCNet). GCNet effectively handles numerous noise points and significant individual differences in the data, achieving stable and precise segmentation on small-scale datasets. The model comprises two core modules: Grouped Global Attention (GGA) modules and Cross-Layer Fusion (CLF) modules. The GGA modules capture and group texture and contour features, while the CLF modules combine these features with deep semantic information to improve prediction. Experimental results on the Children’s Dental Panoramic Radiographs dataset show that our model outperformed existing models such as GT-U-Net and Teeth U-Net, with a Dice coefficient of 0.9338, sensitivity of 0.9426, and specificity of 0.9821. GCNet also produces clearer segmentation boundaries than the other models.
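The abstract only sketches the two modules at a high level, so the following is a minimal, hypothetical PyTorch-style illustration of the general ideas it names: a grouped attention block that splits feature channels into groups and lets each group attend over spatial positions, and a cross-layer fusion block that merges shallow texture/contour features with upsampled deep semantic features. The class names, group count, and layer choices are assumptions for illustration, not the authors' published GCNet implementation.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class GroupedGlobalAttention(nn.Module):
    """Sketch (assumed design): split channels into groups and apply
    self-attention over spatial positions within each group."""
    def __init__(self, channels: int, groups: int = 4):
        super().__init__()
        assert channels % groups == 0
        self.groups = groups
        self.qkv = nn.Conv2d(channels, channels * 3, kernel_size=1)
        self.proj = nn.Conv2d(channels, channels, kernel_size=1)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        b, c, h, w = x.shape
        g, d = self.groups, c // self.groups
        q, k, v = self.qkv(x).chunk(3, dim=1)
        # (batch, groups, positions, per-group channels)
        q = q.view(b, g, d, h * w).transpose(2, 3)
        k = k.view(b, g, d, h * w).transpose(2, 3)
        v = v.view(b, g, d, h * w).transpose(2, 3)
        attn = torch.softmax(q @ k.transpose(-2, -1) / d ** 0.5, dim=-1)
        out = (attn @ v).transpose(2, 3).reshape(b, c, h, w)
        return x + self.proj(out)  # residual connection

class CrossLayerFusion(nn.Module):
    """Sketch (assumed design): upsample deep semantic features and fuse
    them with shallow texture/contour features before prediction."""
    def __init__(self, shallow_ch: int, deep_ch: int, out_ch: int):
        super().__init__()
        self.fuse = nn.Sequential(
            nn.Conv2d(shallow_ch + deep_ch, out_ch, kernel_size=3, padding=1),
            nn.BatchNorm2d(out_ch),
            nn.ReLU(inplace=True),
        )

    def forward(self, shallow: torch.Tensor, deep: torch.Tensor) -> torch.Tensor:
        deep = F.interpolate(deep, size=shallow.shape[-2:],
                             mode="bilinear", align_corners=False)
        return self.fuse(torch.cat([shallow, deep], dim=1))
```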

Original language: English
Article number: 64
Pages (from-to): 1-15
Number of pages: 15
Journal: Scientific Reports
Volume: 15
Issue number: 1
Early online date: 2 Jan 2025
DOIs
Publication status: Published - Dec 2025

Bibliographical note

Copyright the Author(s) 2025. Version archived for private and non-commercial use with the permission of the author/s and according to publisher conditions. For further rights please contact the publisher.
