@Article{info:doi/10.2196/62774,
  author   = "Lei, Changbin and Jiang, Yan and Xu, Ke and Liu, Shanshan and Cao, Hua and Wang, Cong",
  title    = "Convolutional Neural Network Models for Visual Classification of Pressure Ulcer Stages: Cross-Sectional Study",
  journal  = "JMIR Med Inform",
  year     = "2025",
  month    = "Mar",
  day      = "25",
  volume   = "13",
  pages    = "e62774",
  keywords = "pressure ulcer; deep learning; artificial intelligence; neural network; CNN; machine learning; image; imaging; classification; ulcer; sore; pressure; wound; skin",
  abstract = "Background: Pressure injuries (PIs) pose a negative health impact and a substantial economic burden on patients and society. Accurate staging is crucial for treating PIs. Owing to the diversity of their clinical manifestations and the lack of objective biochemical and pathological examinations, accurate staging of PIs is a major challenge. Deep learning with convolutional neural networks (CNNs) has demonstrated exceptional classification performance in the intricate domain of skin diseases and wounds and has the potential to improve the staging accuracy of PIs. Objective: We explored the potential of applying AlexNet, VGGNet16, ResNet18, and DenseNet121 to PI staging, aiming to provide an effective tool to assist in staging. Methods: PI images covering stage I, stage II, stage III, stage IV, unstageable, and suspected deep tissue injury (SDTI) were collected from patients at a tertiary hospital in China. Additionally, we augmented the data by cropping and flipping each PI image 9 times. The collected images were then divided into training, validation, and test sets at a ratio of 8:1:1. We then trained AlexNet, VGGNet16, ResNet18, and DenseNet121 on these images to develop staging models. Results: We collected 853 raw PI images with the following distribution across stages: stage I (n=148), stage II (n=121), stage III (n=216), stage IV (n=110), unstageable (n=128), and SDTI (n=130). A total of 7677 images were obtained after data augmentation. Among the CNN models, DenseNet121 achieved the highest overall accuracy, 93.71{\%}; AlexNet, VGGNet16, and ResNet18 achieved overall accuracies of 87.74{\%}, 82.42{\%}, and 92.42{\%}, respectively. Conclusions: The CNN-based models demonstrated strong classification ability for PI images, which might promote highly efficient, intelligent PI staging. In the future, the models could be compared with nurses of different experience levels to further verify their clinical applicability.",
  issn     = "2291-9694",
  doi      = "10.2196/62774",
  url      = "https://medinform.jmir.org/2025/1/e62774"
}