Published in Vol 10, No 11 (2022): November

Automatic Screening of Pediatric Renal Ultrasound Abnormalities: Deep Learning and Transfer Learning Approach


Original Paper

1Department of Pediatrics, Taichung Veterans General Hospital, Taichung, Taiwan

2Institute of Statistics, National Yang Ming Chiao Tung University, Hsing-chu, Taiwan

3Institute of Electrical & Control Engineering, National Yang Ming Chiao Tung University, Hsing-chu, Taiwan

4Department of Pediatrics, National Yang Ming Chiao Tung University, Taipei, Taiwan

5Department of Post-Baccalaureate Medicine, College of Medicine, National Chung Hsing University, Taichung, Taiwan

Corresponding Author:

Lin-Shien Fu, MD

Department of Pediatrics

Taichung Veterans General Hospital

No.1650, Section 4, Taiwan Blvd.



Phone: 886 4 23592525 ext 5909

Fax: 886 4 23741359


Background: In recent years, advances in and the spread of portable ultrasonic probes have made ultrasound (US) a useful tool for physicians when making a diagnosis. With the advent of machine learning and deep learning, the development of a computer-aided diagnostic system for screening renal US abnormalities can assist general practitioners in the early detection of pediatric kidney diseases.

Objective: In this paper, we sought to evaluate the diagnostic performance of deep learning techniques in classifying kidney images as normal or abnormal.

Methods: We chose 330 normal and 1269 abnormal pediatric renal US images to establish an artificial intelligence model. The abnormal images involved stones, cysts, hyperechogenicity, space-occupying lesions, and hydronephrosis. We preprocessed the original images for subsequent deep learning. We redefined the final connecting layers of the ResNet-50 pretrained model to classify the extracted features as abnormal or normal. The performance of the model was tested on a validation data set using the area under the receiver operating characteristic curve, accuracy, specificity, and sensitivity.

Results: The deep learning model, 94 MB in size and based on ResNet-50, was built for classifying normal and abnormal images. The accuracy (%)/area under the curve for the validated images of stone, cyst, hyperechogenicity, space-occupying lesions, and hydronephrosis were 93.2/0.973, 91.6/0.940, 89.9/0.940, 91.3/0.934, and 94.1/0.996, respectively. The accuracy of normal image classification in the validation data set was 90.1%. The overall accuracy (%)/area under the curve was 92.9/0.959.

Conclusions: We established a useful, computer-aided model for automatic classification of pediatric renal US images in terms of normal and abnormal categories.

JMIR Med Inform 2022;10(11):e40878



Renal abnormalities are important findings in pediatric medicine. It is well accepted that “silent” renal abnormalities can be effectively detected through ultrasound (US) screening, which makes both early diagnosis and intervention possible [1,2]. US is a safe, relatively cheap, and convenient medical modality. Portable ultrasonic probes and internet connectivity have developed rapidly in recent years, extending the potential coverage of pediatric renal US screening throughout the world. However, current methods remain limited due to the lack of automated processes that accurately classify diseased and normal kidneys [3].

Common renal abnormalities identified in US images in a series of more than 1 million school children included hydronephrosis (39.6%), unilateral small kidney (19.8%), unilateral agenesis (15.9%), cystic disease (13.9%), abnormal shapes—ectopic, horseshoe, and duplication of kidney (8%)—as well as others, that is, stones, tumors, and parenchymal diseases (1.5%) [1].

Thus far, publications regarding computer-aided US image interpretation have been much fewer than those based on computerized tomography or magnetic resonance imaging [4,5]. The use of US presents unique challenges, such as different angles of image sampling, low image quality caused by noise and artifacts, high dependence on operator experience, and high inter- and intraobserver variability across different institutes and manufacturers’ US systems [6]. A 2021 review of medical US [7] identified only 3 studies involving deep learning for renal US image classification [5,8,9].

This study was performed to select normal pediatric renal US images, as well as different types of renal abnormalities previously mentioned, for purposes of machine learning. Through the pretreatment of original images, adequate grouping of images, and deep neural network training, we hope that renal images can be correctly classified as either normal or abnormal. The aim of this study is to establish an artificial intelligence (AI) model for screening renal abnormalities to enhance the well-being of children even in areas where there is no pediatric nephrologist.

Ethics Approval

This study was approved by the institutional review board of Taichung Veterans General Hospital (No. CE20204A).


The images used were all derived from the original images acquired in the pediatric US examination room at Taichung Veterans General Hospital from January 2000 to December 2020. Four different US machines, manufactured by Philips and Acuson, were used in this study. All images were obtained by a US technician with more than 20 years of experience, using a 4 MHz sector transducer. We chose only longitudinal views of the right and left kidneys.

We established 2 data sets: one for training and the other for validation. The images in these 2 data sets did not overlap.

Image Preprocessing and Data Cleaning

All images were stripped of identifying data, including name, date of birth, date of examination, and chart number. The size of all images was 600x480 pixels. We processed the images using software to obtain adequate inputs for machine learning. As shown in Figure 1, after preprocessing, each image contains a kidney, a square of liver obtained simultaneously during the examination, and a gray scale gradient in the upper left part of the image.

Figure 1. Preprocessing images for machine learning.

Image Grouping

Normal images were those having a normal size and shape, as well as a clear renal cortex or medulla without hydronephrosis, hyperechogenicity, cysts, stones, or any space-occupying lesion. We prepared 330 images for this group. There were a total of 1269 abnormal renal images. The abnormalities included hydronephrosis, hyperechogenicity, cysts, stones, and space-occupying lesions. The numbers of images and examinations are summarized in Table 1. Hyperechogenicity in the renal US images included increased renal cortex echogenicity as compared to the liver, poor differentiation of the renal cortex or medulla, and inversed echogenicity of the renal cortex or medulla. These findings were judged by 2 pediatric nephrologists.

Table 1. Distribution of images and examinations in the training and testing augmented database.
Diagnosis | Training (cases/images) | Testing (cases/images) | Totals (cases/images)
Space-occupying lesions | 108/181 | 26/45 | 134/226


Machine Learning

We performed feature extraction with the ResNet-50 model [8-10] in PyTorch, pretrained on the ImageNet data set [11]. We used the pretrained weights of ResNet and froze them, so there was no backpropagation through the feature extractor when training on US images. The input data were renal US images of 800x600 pixels in size. We resized them to 224x224 pixels prior to feeding the images into the network.

For classification, we redefined the final fully connected layers, which output the image class as abnormal or normal. After the training images went through ResNet-50, there were 2048 output features. There were 4 components in the final fully connected block. The first was a linear layer taking the 2048 extracted features and producing 512 outputs. The second was a rectified linear unit (ReLU), a piecewise linear function that passes only positive values. Third, we added a dropout layer to prevent overfitting. The fourth component was another linear layer, with 512 inputs and 2 outputs standing for the 2 categories, that is, the abnormal and normal classes with their probabilities.

We optimized the model with the Adam optimizer at a learning rate of 0.01 [12]. A total of 30 epochs were used for convolutional neural network training. We created a 94 MB model to classify normal versus abnormal renal US images. Figure 2 summarizes our deep learning structure.
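A minimal training-loop sketch for the reported settings (Adam, learning rate 0.01, 30 epochs, 2-class cross-entropy) is shown below; `model` and `train_loader` are assumed to come from the setup described above.

```python
import torch
import torch.nn as nn

def train(model, train_loader, epochs=30, lr=0.01, device="cpu"):
    """Train the classifier head with the settings reported in the paper."""
    model.to(device)
    criterion = nn.CrossEntropyLoss()
    # Only parameters with requires_grad=True are optimized; the frozen
    # backbone contributes nothing to the update.
    optimizer = torch.optim.Adam(
        (p for p in model.parameters() if p.requires_grad), lr=lr
    )
    for _ in range(epochs):
        model.train()
        for images, labels in train_loader:
            images, labels = images.to(device), labels.to(device)
            optimizer.zero_grad()
            loss = criterion(model(images), labels)
            loss.backward()
            optimizer.step()
    return model
```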

Figure 2. Brief summary of machine learning.

Experimental Setup

We implemented a training-testing approach. The data set was randomly divided into 1272/1599 (79.55%) images for training and 327/1599 (20.45%) images for testing to establish the model. We repeated this random split 10 times, rerunning the machine learning described in the previous paragraph each time. For validation of the 94 MB model, there was a separate validation data set with 327 pediatric renal US images, including 66 (20.2%) normal, 37 (11.3%) hydronephrosis, 53 (16.2%) cyst, 95 (29.1%) stone, 53 (16.2%) hyperechogenicity, and 26 (7.9%) space-occupying US images. None of these images appeared in the data set used to establish the model.
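The repeated random split can be sketched with `torch.utils.data.random_split`; `dataset` stands for the 1599-image set, and each seed produces one of the 10 randomizations (the seeding scheme is an illustration, not the paper's).

```python
import torch
from torch.utils.data import random_split

def split_runs(dataset, runs=10, n_train=1272):
    """Yield (train, test) splits of 1272/327 images, once per run."""
    for seed in range(runs):
        gen = torch.Generator().manual_seed(seed)
        yield random_split(
            dataset, [n_train, len(dataset) - n_train], generator=gen
        )
```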

Evaluation of Performance

We evaluated the performance on a per-image basis. The diagnostic performance was measured by accuracy, specificity, sensitivity, positive predictive value, and negative predictive value. To calculate the above metrics, we defined an abnormal result as positive and a normal result as negative.
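With abnormal defined as positive and normal as negative, the metrics above follow directly from the confusion-matrix counts, as in this small sketch:

```python
def screening_metrics(tp, fp, tn, fn):
    """Per-image screening metrics with abnormal = positive, normal = negative."""
    return {
        "accuracy": (tp + tn) / (tp + fp + tn + fn),
        "sensitivity": tp / (tp + fn),   # abnormal images correctly flagged
        "specificity": tn / (tn + fp),   # normal images correctly cleared
        "ppv": tp / (tp + fp),           # positive predictive value
        "npv": tn / (tn + fn),           # negative predictive value
    }
```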

After 30 epochs for these 1599 pediatric renal US images, we obtained satisfactory results. The performance metrics in the test part of the data set are shown in Table 2. The accuracy in different abnormalities ranged from 95% to 100%.

Table 2. Evaluation metrics for screening different abnormalities from test renal ultrasound images in the data set.
Diagnosis (number) | Accuracy (%) | Sensitivity (%) | Specificity (%) | AUC-ROC^a | PPV^b (%) | NPV^c (%)
Space-occupying lesions | 98.7 | 95.6 | 100 | 0.935 | 100 | 97.1

aAUC-ROC: area under the receiver operating characteristic curve.

bPPV: positive predictive value.

cNPV: negative predictive value.

The accuracies for each abnormality ranged from 95.2% to 100%, with an overall accuracy of 98.4%. The areas under the curve (AUCs) ranged from 0.935 to 0.998, and the AUC for overall performance was 0.961. To check the consistency of the machine learning performance, we repeated the experiment 10 times using different 80%/20% training/test randomizations; the accuracies ranged from 95.2% to 98.4%, with no difference between these 10 tests (P>.05). We also performed a 5-fold cross test, and the results are shown in Table 3.
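The 5-fold cross test in Table 3 can be sketched with a standard fold generator; each fold's indices would select the training and held-out images from the 1599-image set (the shuffling and seed are illustrative assumptions).

```python
import numpy as np
from sklearn.model_selection import KFold

def five_fold_indices(n_images=1599, seed=0):
    """Return (train_idx, test_idx) pairs for a 5-fold cross test."""
    kf = KFold(n_splits=5, shuffle=True, random_state=seed)
    return list(kf.split(np.arange(n_images)))
```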

We validated the 94 MB model through machine learning with another 327 pediatric renal US images. The classifications included 66 (20.2%) normal, 37 (11.3%) hydronephrosis, 53 (16.2%) cyst, 95 (29.1%) stone, 53 (16.2%) hyperechogenicity, and 26 (7.9%) space-occupying US images. The performances based on each single image are summarized in Table 4. Accuracy in the different abnormalities ranged from 89.9% to 94.1%, with an average of 92.3%. AUC was from 0.934 to 0.996 (Figure 3). The overall performance in AUC was 0.959. The macro F1 was 0.924.

Table 3. Results of the 5-fold cross test.

Metric | Test 1 | Test 2 | Test 3 | Test 4 | Test 5 | Overall
Normal accuracy (%) | 80 | 87.9 | 87.9 | 87.9 | 87.9 | 86.32
Stone accuracy (%)/AUC^a | 91.2/0.925 | 92.9/0.897 | 89.4/0.923 | 89.4/0.925 | 94.3/0.927 | 91.60/0.927
Cyst accuracy (%)/AUC | 75.4/0.858 | 90.6/0.896 | 84.9/0.927 | 90.6/0.898 | 82.1/0.891 | 85.3/0.903
Hyperechogenicity accuracy (%)/AUC | 84.8/0.848 | 81.8/0.855 | 81.8/0.862 | 81.8/0.862 | 81.8/0.891 | 84.2/0.859
Space-occupying lesion accuracy (%)/AUC | 92.5/0.903 | 84.9/0.881 | 94.5/0.917 | 83.0/0.874 | 82.6/0.863 | 86.8/0.896
Hydronephrosis accuracy (%)/AUC | 100/0.965 | 91.9/0.888 | 89.2/0.940 | 94.6/0.932 | 91.4/0.871 | 94/0.928
Overall accuracy (%)/AUC | 87.8/0.903 | 89/0.887 | 87.8/0.928 | 87.5/0.902 | 87.7/0.901 | 88.3/0.900

aAUC: area under curve.

Table 4. Evaluation metrics for screening different abnormalities from other renal ultrasound images for validation.
Diagnosis | US images, n (%) | Accuracy (%) | Sensitivity (%) | Specificity (%) | AUC-ROC^a | PPV^b (%) | NPV^c (%) | F1-score
Normal | 66 (20.2) | N/A^d | N/A | 90.9 | N/A | N/A | N/A | N/A
Stone | 93 (28.4) | 93.2 | 94.7 | N/A | 0.973 | 93.2 | 92.3 | 0.927
Cyst | 53 (16.2) | 91.6 | 92.5 | N/A | 0.940 | 91.6 | 93.8 | 0.918
Hyperechogenicity | 53 (16.2) | 89.9 | 88.7 | N/A | 0.940 | 89.9 | 90.9 | 0.897
Space-occupying lesions | 26 (7.9) | 91.3 | 92.3 | N/A | 0.934 | 91.3 | 96.81 | 0.923
Hydronephrosis | 37 (11.3) | 94.1 | 100 | N/A | 0.996 | 94.2 | 100 | 0.957
Overall | 328 (100) | 92.9 | 96.1 | N/A | 0.959 | 93.6 | 77.92 | 0.924^e

aAUC-ROC: area under the receiver operating characteristic curve.

bPPV: positive predictive value.

cNPV: negative predictive value.

dN/A: not applicable.

eMacro F1.

Figure 3. Area under the receiver operating characteristic curves of different image abnormalities and the overall performance. AUC: area under curve.

The main finding of this study is a useful AI model for screening abnormal pediatric renal US images, with an average accuracy of 92.9%. This fulfills the main purpose of the study: to develop a useful computer-aided diagnosis model for automatically screening various abnormal patterns in pediatric renal US. The machine learning methods were based on a convolutional neural network and fine-tuning, combined with our image preprocessing and classification strategies, which achieved a model feasible for clinical purposes. We constructed a stable classifier that combined transfer learning and training from scratch, balancing the training of a medical data set of adequate sample size.

Clinical applications of AI in nephrology are versatile, but the use of renal US in this field is still in its infancy [13,14]. Reports derived from renal US images alone have been relatively limited up until now, with the major reports involving acute and chronic injuries [15-17]. Most renal imaging studies for AI used magnetic resonance imaging, computerized tomography, and patient histology for tumors, stones, nephropathy, transplantation, and other conditions [18-21]. The key challenges associated with deep learning involving US include reliability, generalizability, and bias [22]. Basic studies for enhancing AI performance in renal US have begun and are ongoing [23-25].

There have been 4 reports from studies involving clinical AI applications in pediatric renal US abnormalities [3,5,8,9]. Zheng et al [3] found that the deep transfer learning method offers satisfactory accuracy in identifying congenital anomalies of the kidney and urinary tract, even with a data set as small as 50 children with such anomalies and 50 controls. Yin et al [5] performed a similar study to detect posterior urethral valves. Sudharson et al [8] used 3 variant data sets for identifying renal cysts, stones, and tumors, with an accuracy rate of 96.54% in quality images and 95.58% in noisy images. Smail et al [9] attempted to use AI for grading hydronephrosis with the 5-point scoring system of the Society of Fetal Urology (SFU). The best recorded performance was a 78% accuracy rate when dividing hydronephrosis into mild and severe; the accuracy rate was only 51% when using the 5-point system. In our study, we established a single 94 MB model to classify normal versus abnormal pediatric renal US images. The abnormalities included renal cysts, stones, and tumors, as reported by Sudharson et al [8]. In addition, the model was able to identify images of hydronephrosis and hyperechogenicity. Compared with the results of Smail et al [9], our results showed better classification accuracy for hydronephrosis. The 37 validated images were moderate or severe hydronephrosis, that is, SFU classes II, III, and IV. Our model achieved 100% sensitivity, compared with the previously reported sensitivity of 46%-54% [26].

For SFU class I, our model had an accuracy of 71.7% (119/166). Up until now, grading of hydronephrosis has been an ongoing challenge [27], and the benefit of extremely early intervention for mild hydronephrosis remains unestablished. If a child with mild hydronephrosis also has other renal abnormalities, such as stones, cysts, or hyperechogenicity, our model would very likely provide alarming information about these conditions.

The unique pretreatment of images in this study was designed to provide a comparison with the liver echogenicity obtained in the same examination. This step is necessary for identifying hyperechogenicity. Other abnormalities, such as hydronephrosis, cysts, stones, and tumors, showed no difference in classification regardless of whether the input images included the square of liver echogenicity and the gray scale gradient in the left part of the image shown in Figure 1. As demonstrated in Table 4, the accuracy and sensitivity for identifying hyperechogenicity were lower than for other abnormalities. Increased echogenicity is an important finding in evaluating muscle, thyroid, vascular, and renal diseases [28]. Gray scale US presents a general sensitivity of 62% to 77%, a specificity of 58% to 73%, and a positive predictive value of 92% for detecting microscopically confirmed renal parenchymal diseases. These results reveal that echogenicity change alone is not sensitive enough for detecting renal disease. Abnormalities in renal echogenicity include increased echogenicity, poor differentiation of the renal cortex or medulla, and inversed echogenicity of the renal cortex and medulla [29]. In practice, we quite often cannot obtain a square of homogeneous liver echogenicity for machine learning, and it is also difficult for the naked eye to discriminate subtle gray scale differences; even so, when the classification was checked by a pediatric nephrologist, the results were acceptable. Currently, so-called “radiomics” information, which can aid US imaging in AI, is emerging [30], and a more precise assessment of US pixels may enhance the utility of hyperechogenicity.

A limitation of this study is its single-center image source. More images from different hospitals, areas, ethnicities, and US manufacturers need to be used. We conducted a small-scale external validation using US images from other manufacturers, including General Electric, Siemens, and Toshiba; after image pretreatment, the results reached 100% sensitivity, 80% specificity, and 90% accuracy. Another limitation is the moderate number of images in the data set. We did not separate right and left kidney images for training, although the results were acceptable. We will further validate our method on larger data sets.

In conclusion, this study proposed the use of an automatic model for purposes of screening various abnormalities in pediatric renal US images. We will continue to enhance the model’s performance as we conduct additional evaluation studies surrounding its future clinical applications, including being an auxiliary software for screening children’s renal abnormalities in remote areas.


This study was supported in part by grants from Taichung Veterans General Hospital (TCVGH-1106506B, TCVGH-1116504C). The statistical work was partially supported by the Ministry of Science and Technology, Taiwan, R.O.C. (grants MOST 110-2118-M-A49-002-MY3 and 110-2634-F-A49-005).

Conflicts of Interest

None declared.

  1. Sheih CP, Liu MB, Hung CS, Yang KH, Chen WY, Lin CY. Renal abnormalities in schoolchildren. Pediatrics 1989 Dec;84(6):1086-1090. [Medline]
  2. Parakh P, Bhatta NK, Mishra OP, Shrestha P, Budhathoki S, Majhi S, et al. Urinary screening for detection of renal abnormalities in asymptomatic school children. Nephrourol Mon 2012;4(3):551-555 [FREE Full text] [CrossRef] [Medline]
  3. Zheng Q, Furth SL, Tasian GE, Fan Y. Computer-aided diagnosis of congenital abnormalities of the kidney and urinary tract in children based on ultrasound imaging data by integrating texture image features and deep transfer learning image features. J Pediatr Urol 2019 Feb;15(1):75.e1-75.e7 [FREE Full text] [CrossRef] [Medline]
  4. Akkus Z, Cai J, Boonrod A, Zeinoddini A, Weston AD, Philbrick KA, et al. A Survey of Deep-Learning Applications in Ultrasound: Artificial Intelligence-Powered Ultrasound for Improving Clinical Workflow. J Am Coll Radiol 2019 Sep;16(9 Pt B):1318-1328. [CrossRef] [Medline]
  5. Yin S, Peng Q, Li H, Zhang Z, You X, Fischer K, et al. Multi-instance Deep Learning of Ultrasound Imaging Data for Pattern Classification of Congenital Abnormalities of the Kidney and Urinary Tract in Children. Urology 2020 Aug;142:183-189 [FREE Full text] [CrossRef] [Medline]
  6. Liu S, Wang Y, Yang X, Lei B, Liu L, Li SX, et al. Deep Learning in Medical Ultrasound Analysis: A Review. Engineering 2019 Apr;5(2):261-275. [CrossRef]
  7. De Jesus-Rodriguez HJ, Morgan MA, Sagreiya H. Deep Learning in Kidney Ultrasound: Overview, Frontiers, and Challenges. Adv Chronic Kidney Dis 2021 May;28(3):262-269. [CrossRef] [Medline]
  8. Sudharson S, Kokil P. An ensemble of deep neural networks for kidney ultrasound image classification. Comput Methods Programs Biomed 2020 Dec;197:105709. [CrossRef] [Medline]
  9. Smail LC, Dhindsa K, Braga LH, Becker S, Sonnadara RR. Using Deep Learning Algorithms to Grade Hydronephrosis Severity: Toward a Clinical Adjunct. Front Pediatr 2020;8:1 [FREE Full text] [CrossRef] [Medline]
  10. ResNet. PyTorch.   URL: [accessed 2022-10-05]
  11. Fan R, Chang K, Hsieh C, Wang XR, Lin CJ. LIBLINEAR: a library for large linear classification. JMLR 2008;9(61):1871-1874.
  12. He K, Zhang X, Ren S. Deep residual learning for image recognition. 2016 Presented at: Proceedings of the IEEE conference on computer vision and pattern recognition; June 27-30, 2016; Las Vegas, NV, USA. [CrossRef]
  13. Pan SJ, Yang Q. A Survey on Transfer Learning. IEEE Trans. Knowl. Data Eng 2010 Oct;22(10):1345-1359. [CrossRef]
  14. Abadi M. TensorFlow: learning functions at scale. SIGPLAN Not 2016 Dec 05;51(9):1-1. [CrossRef]
  15. Srivastava N, Hinton G, Krizhevsky A, Sutskever I, Salakhutdinov R. Dropout: a simple way to prevent neural networks from overfitting. JMLR 2014;15(56):1929-1958.
  16. Rashidi P, Bihorac A. Artificial intelligence approaches to improve kidney care. Nat Rev Nephrol 2020 Feb;16(2):71-72 [FREE Full text] [CrossRef] [Medline]
  17. Lee M, Wei S, Anaokar J, Uzzo R, Kutikov A. Kidney cancer management 3.0: can artificial intelligence make us better? Curr Opin Urol 2021 Jul 01;31(4):409-415. [CrossRef] [Medline]
  18. Kuo C, Chang C, Liu K, Lin W, Chiang H, Chung C, et al. Automation of the kidney function prediction and classification through ultrasound-based kidney imaging using deep learning. NPJ Digit Med 2019;2:29 [FREE Full text] [CrossRef] [Medline]
  19. Bandara MS, Gurunayaka B, Lakraj G, Pallewatte A, Siribaddana S, Wansapura J. Ultrasound Based Radiomics Features of Chronic Kidney Disease. Acad Radiol 2022 Feb;29(2):229-235. [CrossRef] [Medline]
  20. Ying F, Chen S, Pan G, He Z. Artificial Intelligence Pulse Coupled Neural Network Algorithm in the Diagnosis and Treatment of Severe Sepsis Complicated with Acute Kidney Injury under Ultrasound Image. J Healthc Eng 2021;2021:6761364 [FREE Full text] [CrossRef] [Medline]
  21. Nikpanah M, Xu Z, Jin D, Farhadi F, Saboury B, Ball MW, et al. A deep-learning based artificial intelligence (AI) approach for differentiation of clear cell renal cell carcinoma from oncocytoma on multi-phasic MRI. Clin Imaging 2021 Sep;77:291-298. [CrossRef] [Medline]
  22. Yildirim K, Bozdag PG, Talo M, Yildirim O, Karabatak M, Acharya UR. Deep learning model for automated kidney stone detection using coronal CT images. Comput Biol Med 2021 Aug;135:104569. [CrossRef] [Medline]
  23. Hermsen M, Volk V, Bräsen JH, Geijs DJ, Gwinner W, Kers J, et al. Quantitative assessment of inflammatory infiltrates in kidney transplant biopsies using multiplex tyramide signal amplification and deep learning. Lab Invest 2021 Aug;101(8):970-982 [FREE Full text] [CrossRef] [Medline]
  24. Farris AB, Vizcarra J, Amgad M, Cooper LAD, Gutman D, Hogan J. Artificial intelligence and algorithmic computational pathology: an introduction with renal allograft examples. Histopathology 2021 May;78(6):791-804 [FREE Full text] [CrossRef] [Medline]
  25. De Jesus-Rodriguez HJ, Morgan MA, Sagreiya H. Deep Learning in Kidney Ultrasound: Overview, Frontiers, and Challenges. Adv Chronic Kidney Dis 2021 May;28(3):262-269. [CrossRef] [Medline]
  26. Chen G, Dai Y, Zhang J, Yin X, Cui L. MBANet: Multi-branch aware network for kidney ultrasound images segmentation. Comput Biol Med 2022 Feb;141:105140. [CrossRef] [Medline]
  27. Lassau N, Estienne T, de Vomecourt P, Azoulay M, Cagnol J, Garcia G, et al. Five simultaneous artificial intelligence data challenges on ultrasound, CT, and MRI. Diagn Interv Imaging 2019 Apr;100(4):199-209. [CrossRef] [Medline]
  28. Onen A. Grading of Hydronephrosis: An Ongoing Challenge. Front Pediatr 2020;8:458 [FREE Full text] [CrossRef] [Medline]
  29. Quaia E, Correas JM, Mehta M, Murchison JT, Gennari AG, van Beek EJR. Gray Scale Ultrasound, Color Doppler Ultrasound, and Contrast-Enhanced Ultrasound in Renal Parenchymal Diseases. Ultrasound Q 2018 Dec;34(4):250-267. [CrossRef] [Medline]
  30. Grenier N, Merville P, Combe C. Radiologic imaging of the renal parenchyma structure and function. Nat Rev Nephrol 2016 Jun;12(6):348-359. [CrossRef] [Medline]

AI: artificial intelligence
AUC: area under curve
SFU: Society of Fetal Urology
US: ultrasound

Edited by C Lovis; submitted 12.07.22; peer-reviewed by M Lee, Y Fan, SJC Soerensen, V Khetan, S Yin; comments to author 03.08.22; revised version received 16.09.22; accepted 02.10.22; published 02.11.22


©Ming-Chin Tsai, Henry Horng-Shing Lu, Yueh-Chuan Chang, Yung-Chieh Huang, Lin-Shien Fu. Originally published in JMIR Medical Informatics, 02.11.2022.

This is an open-access article distributed under the terms of the Creative Commons Attribution License, which permits unrestricted use, distribution, and reproduction in any medium, provided the original work, first published in JMIR Medical Informatics, is properly cited. The complete bibliographic information, a link to the original publication, as well as this copyright and license information must be included.