Published on in Vol 14 (2026)

Preprints (earlier versions) of this paper are available at https://preprints.jmir.org/preprint/79389, first published .
Two-Minute Deep Learning–Powered Brain Quantitative Mapping: Accelerating Clinical Imaging With Synthetic Magnetic Resonance Imaging


1Precision and Intelligence Medical Imaging Lab, Beijing Friendship Hospital, Capital Medical University, No.95 Yongan Road, Xicheng District, Beijing, China

2Department of Radiology, Beijing Friendship Hospital, Capital Medical University, Beijing, China

3Department of Medical Engineering, Beijing Friendship Hospital, Capital Medical University, Beijing, China

4Department of Radiology, Beijing Anzhen Hospital, Capital Medical University, Beijing, China

5Department of Radiology, Aerospace Center Hospital, Beijing, China

6School of Biological Science and Medical Engineering, Beihang University, Beijing, China

Corresponding Author:

Pengling Ren, PhD


Background: Quantitative magnetic resonance imaging (MRI) is an advanced technique that can map the physical properties (T1, T2, and proton density [PD]) of different tissues, offering crucial insights for disease diagnosis. Nonetheless, its practical application is constrained by several factors, most notably the protracted scanning duration.

Objective: This study aimed to explore whether deep learning (DL)–based superresolution reconstruction of ultrafast whole brain synthetic MRI can obtain quantitative T1/T2/PD maps that are closely approximated to those from routine clinical scans, while substantially shortening scan time and preserving diagnostic image quality.

Methods: A total of 151 healthy adults and 7 individuals with different pathologies were prospectively enrolled. Each individual was examined twice on a 3.0 T scanner using routine and fast synthetic MRI protocols. The routine scans (acquisition matrix: 320×256) were interpolated to 512×512 for clinical display and served as reference images. The fast scans (acquisition matrix: 192×128) were preprocessed to 256×256 and used as inputs to a superresolution generative adversarial network (SRGAN), which reconstructed them to the same interpolated 512×512 resolution as the reference. For each quantitative map, images from 120 (75.95%) healthy individuals were used for training, and images from 38 (24.05%) individuals (healthy individuals: n=31, 19.62%; patients: n=7, 4.43%) were used for testing. Agreement was assessed with a paired t test, two 1-sided tests, Bland-Altman analysis, and coefficients of variation.

Results: DL-reconstructed and reference T1/T2/PD values were strongly correlated (T1: R²=0.98; T2: R²=0.97; and PD: R²=0.99). The slopes of the linear regression were near 1.0 for both T1 (0.9418) and PD (0.9946), whereas T2 agreement was moderate, with a slope of 0.8057. Additionally, the average biases of the T1, T2, and PD values were small (0.93%, −0.85%, and 0.31%, respectively). The intra- and intergroup coefficients of variation for most brain regions stayed below 5%, especially for PD values, and quantitative accuracy was preserved for lesions after DL reconstruction. Quantitative and qualitative analyses of image quality also indicate that SRGAN markedly suppressed noise and artifacts in fast acquisitions, restoring structural fidelity (structural similarity image measure) and signal fidelity (peak signal-to-noise ratio) close to the level of routine scans while substantially improving perceptual naturalness over fast scans (as measured by the naturalness image quality evaluator), although not yet matching that of routine imaging.

Conclusions: SRGAN superresolution applied to ultrafast synthetic MRI yields whole brain T1, T2, and PD maps that correlate strongly with routine synthetic MRI while halving acquisition time and maintaining diagnostic image quality. T1 and PD values exhibit near-ideal agreement, whereas T2 values demonstrate a moderate systematic underestimation; nonetheless, this approach represents a promising step toward accelerating clinical deployment of quantitative brain imaging.

JMIR Med Inform 2026;14:e79389

doi:10.2196/79389

Keywords



Quantitative magnetic resonance imaging (qMRI) can reflect the inherent properties of human tissue relaxation times and proton density (PD), providing valuable support for verifying visual assessments of structures and tissues against a normal quantitative standard, thereby holding significant clinical diagnostic value [1]. Nevertheless, the extended scan time of qMRI imposes limitations on its practical use. Synthetic MRI, based on multidynamic, multiecho sequences, can provide multiple quantitative parameters and multicontrast images in a single scan, enabling fast qMRI [2,3]. It has been proven to have diagnostic value comparable to conventional scanning methods, with the generated quantitative data being valuable for disease identification [4-7]. However, a single scan with this method still typically takes 4 to 7 minutes, particularly for brain imaging, which can cause individual discomfort and potentially degrade image quality. Therefore, improving the imaging speed of synthetic MRI is the key to realizing ultrafast qMRI.

It is also crucial to ensure the quality of qMRI and the reliability of quantitative values while accelerating imaging. Fast imaging methods such as parallel acquisition and compressed sensing usually result in a loss of image quality [8]. Reconstruction technology based on deep learning (DL) has been applied in various medical image reconstruction tasks, and its development makes it possible to resolve the trade-off between image quality and scan time. Generative adversarial networks, renowned for generating high-fidelity superresolution images, have been successfully used in MRI [9]. Many researchers have implemented the concept of generative adversarial networks for MRI image reconstruction, achieving high-quality reconstruction from highly undersampled data [10-14]. In addition, the generative adversarial approach can be used for tasks such as artifact correction and denoising in MRI [15,16]. In clinical practice, beyond image quality, it is even more important to determine whether a DL-based reconstruction method can provide low-bias, consistent quantitative data [17].

To shorten synthetic MRI quantitative acquisitions and increase clinical throughput, we hypothesized that DL superresolution can generate T1/T2/PD maps that closely match the quantitative values of routine, longer acquisitions while maintaining image quality, thereby providing faster yet equally reliable neuroimaging and ultimately enhancing diagnostic precision and patient comfort.


Participants

This study was approved by the Medical Research Ethics Committees of the Beijing Friendship Hospital, Capital Medical University (2020-P2-122-02). Written informed consent was obtained from all participants prior to enrollment. A total of 151 healthy individuals were prospectively recruited at 1 institution for this study. The inclusion criteria were as follows: (1) the age of the individual was greater than or equal to 18 years, (2) the individual showed no structural changes or signs of disease in brain MRI, and (3) the individual showed no contraindications or adverse reactions to MRI examination. The exclusion criteria were as follows: (1) there were obvious artifacts in the images, and (2) the individual failed to complete all MRI examinations. In addition, we also collected images from 7 individuals with different pathologies (white matter [WM] hyperintensities, cerebral infarcts, and encephalomalacia) to evaluate the potential for clinical application.

Synthetic MRI Acquisition

All MRI examinations were performed using a SIGNA Pioneer 3.0 T MRI scanner (GE Healthcare). All individuals underwent 2 synthetic MRI scans using the multidynamic multiecho sequence (MDME sequence, named MAGiC on the GE scanner): a routine scan protocol and a fast scan protocol. The routine scan used common clinical acquisition parameters: TR=4000 ms; TE1=18.3 ms; TE2=91.4 ms; TI=28.2 ms; thickness=5 mm; FOV=220×220 mm²; acquisition matrix=320×256; echo-train length=16; bandwidth=31.25 kHz; delay times=170, 670, 1840, and 3840 ms; acceleration factor=2; and 24 slices, with a pixel size of 0.7×0.8 mm² and an acquisition time of 4 minutes 55 seconds. The fast scan shortened the acquisition time by changing the acquisition matrix to 192×128 and the acceleration factor to 3; the remaining parameters were the same as those of the routine scan, giving a pixel size of 1.1×1.7 mm² and an acquisition time of 1 minute 52 seconds.

Data Preprocessing

The same method was used to retrieve the quantitative maps (T1 maps, T2 maps, and PD maps) as mentioned in a previous study [18]. In this study, quantitative maps from the routine scan (reconstruction matrix=512×512, 0.43×0.43 mm² pixel) served as the high-resolution (HR) reference, whereas those from the fast scan (reconstruction matrix=256×256, 0.86×0.86 mm² pixel) were used as low-resolution (LR) inputs. DICOM images were directly read and prepared for network training. It should be noted that the maximum image intensities of T1 maps, T2 maps, and PD maps are 43,000, 20,000, and 1600, respectively, which correspond to 10 times the actual values. This scaling is due to the rescaling slope value of 0.1 specified in the DICOM header file.
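As a minimal illustration of the DICOM rescaling described above, the following numpy sketch applies a rescale slope of 0.1 to stored pixel values; the `apply_rescale` helper and the intercept default are our own additions (an actual pipeline would read RescaleSlope and RescaleIntercept from the DICOM header, eg, with pydicom), and only the scaling arithmetic is taken from the text:

```python
import numpy as np

def apply_rescale(stored: np.ndarray, slope: float = 0.1, intercept: float = 0.0) -> np.ndarray:
    """Map stored DICOM pixel values to physical units via value * slope + intercept."""
    return stored.astype(np.float64) * slope + intercept

# Maximum stored intensities quoted in the text (10x the physical values)
stored_max = {"T1": 43000, "T2": 20000, "PD": 1600}
physical_max = {k: apply_rescale(np.array([v]))[0] for k, v in stored_max.items()}
# a stored T1 of 43,000 therefore corresponds to a physical value of 4300
```
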

Network Architecture

The detailed generator network structure is shown in Figure 1A. It starts with a 9×9 convolutional filter and connects to 5 residual blocks, each of which consists of two 3×3 convolutional layers alternating with batch normalization layers, with activation performed by a rectified linear unit (ReLU) function. After the residual blocks, a 3×3 convolution and a 1×1 convolution are connected, and a subpixel convolutional layer is used to upsample the image. In the generator network, except for the last layer, which uses tanh as the activation function, all layers use ReLU activation. The discriminator network structure is shown in Figure 1B. The discriminator network D first uses an architecture comprising eight 3×3 convolutional layers with Leaky ReLU activation. All convolutional layers except the first are followed by batch normalization layers. A sigmoid activation function is applied to the last fully connected layer, which outputs the probability that the input image is a real HR image rather than an image generated by the generator. The network architecture also includes a pretrained VGG-19 network, which is used for feature extraction and loss function calculation.

Figure 1. The architecture of the superresolution generative adversarial network. BN: batch normalization; DL: deep learning; HR: high resolution; ReLU: rectified linear unit.
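The subpixel upsampling step of the generator can be illustrated in isolation. The numpy sketch below (our own illustration, not the authors' code) performs the pixel-shuffle rearrangement that a subpixel convolutional layer uses to trade channels for spatial resolution; the preceding convolution is omitted:

```python
import numpy as np

def pixel_shuffle(x: np.ndarray, r: int) -> np.ndarray:
    """Rearrange a (C*r^2, H, W) tensor into (C, H*r, W*r), as in subpixel convolution."""
    c_r2, h, w = x.shape
    c = c_r2 // (r * r)
    x = x.reshape(c, r, r, h, w)      # split channels into (C, r, r)
    x = x.transpose(0, 3, 1, 4, 2)    # reorder to (C, H, r, W, r)
    return x.reshape(c, h * r, w * r)

# A 4-channel 256x256 feature map becomes one 512x512 map for r=2,
# matching the 2x superresolution factor used in this study.
feat = np.random.rand(4, 256, 256)
out = pixel_shuffle(feat, 2)
```
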

Loss Function

The pixel-by-pixel error used here is the L1 loss, also known as the mean absolute error, which is calculated using equation 1. Although the original SRGAN formulation discards the pixel-by-pixel error, we add it in a fixed proportion during training to amplify differences and guide model optimization. The cross-entropy loss of the discriminator is used as the adversarial error of the network, calculated using equation 2. In addition to the adversarial error, SRGAN uses a content error, defined as the Euclidean distance between the feature maps of the superresolved image and the reference image, calculated using equation 3. The content error aligns the content of LR and HR images, playing a role analogous to the mean square error; in this network, the pretrained VGG-19 network is used to extract the feature maps. The loss function for the generator is therefore described by equation 4.

L_{\mathrm{mae}} = \frac{1}{CHW} \sum_{i,j,k} \left| I^{DL}_{i,j,k} - I^{GT}_{i,j,k} \right| \quad (1)

where I^{DL} is the output image of the generator and I^{GT} is the input reference image.

L_{\mathrm{gan}} = -\sum_{i} \log D\left(G\left(I^{LR}_{i}\right)\right) \quad (2)

where G and D are the generative network and the discriminative network, respectively, and I^{LR} is the low-resolution input image.

L_{\mathrm{VGG}} = \frac{1}{C_j H_j W_j} \sum_{x=1}^{W_j} \sum_{y=1}^{H_j} \sum_{c=1}^{C_j} \left( \phi_j\left(I^{DL}\right)_{x,y,c} - \phi_j\left(I^{GT}\right)_{x,y,c} \right)^2 \quad (3)

where \phi_j is the feature map obtained from the jth convolutional layer of the VGG-19 network; C_j, H_j, and W_j are the number of channels, height, and width of the feature map, respectively; I^{DL} is the output image of the generator; and I^{GT} is the input reference image.

L_{\mathrm{total}} = L_{\mathrm{mae}} + 10^{-3} L_{\mathrm{gan}} + 2\times 10^{-6} L_{\mathrm{VGG}} \quad (4)
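The loss terms of equations 1-4 can be sketched in numpy as follows. This is our own illustration, with the discriminator outputs and VGG-19 feature maps replaced by stand-in arrays; only the formulas and the weighting coefficients come from the text:

```python
import numpy as np

def l_mae(i_dl, i_gt):
    # Equation 1: mean absolute error over all C*H*W elements
    return np.mean(np.abs(i_dl - i_gt))

def l_gan(d_of_g):
    # Equation 2: adversarial term from discriminator scores on generated images
    return -np.sum(np.log(d_of_g))

def l_vgg(phi_dl, phi_gt):
    # Equation 3: squared Euclidean distance between VGG-19 feature maps,
    # averaged over the Cj*Hj*Wj feature elements
    return np.mean((phi_dl - phi_gt) ** 2)

def l_total(i_dl, i_gt, d_of_g, phi_dl, phi_gt):
    # Equation 4: weighted sum with the weights stated in the text
    return l_mae(i_dl, i_gt) + 1e-3 * l_gan(d_of_g) + 2e-6 * l_vgg(phi_dl, phi_gt)
```
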

Implementation

Approximately 80% of the dataset (120 healthy individuals) was allocated to training, and the remaining individuals (healthy individuals: 31, 19.62%; patients: 7, 4.43%) were used for testing. Slices from each individual were pooled and randomly shuffled within their respective sets to prevent any sequential bias and improve model performance. Before training, all quantitative maps (T1, T2, and PD) underwent intensity normalization to match the output range of the generator network, that is, image_norm = (pixel_value / normalize_para) × 2 − 1, where normalize_para was set to 45,000, 22,000, and 1800 for the T1, T2, and PD maps, respectively.
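The normalization above can be sketched as follows; the mapping to [−1, 1] matches the tanh output of the generator. The `denormalize` helper for recovering quantitative intensities is our own addition, not described in the paper:

```python
import numpy as np

# Per-contrast normalization constants stated in the text
NORMALIZE_PARA = {"T1": 45000, "T2": 22000, "PD": 1800}

def normalize(pixels: np.ndarray, contrast: str) -> np.ndarray:
    """Map raw map intensities to [-1, 1] to match the generator's tanh output."""
    return (pixels / NORMALIZE_PARA[contrast]) * 2 - 1

def denormalize(norm: np.ndarray, contrast: str) -> np.ndarray:
    """Invert the normalization to recover quantitative map intensities."""
    return (norm + 1) / 2 * NORMALIZE_PARA[contrast]
```
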

The DL models for the T1, T2, and PD maps were trained separately but with identical hyperparameters. Each model was implemented using the TensorLayer framework [19] and optimized via the adaptive moment estimation (Adam) optimizer with a learning rate of 1×10⁻³ and stepwise decay every 1000 iterations. Training was performed on an NVIDIA Tesla V100 GPU with a batch size of 8 for a total of 200 epochs. All other parameters were kept at their default settings.

Quantitative Value Measurements

The flowchart illustrating the process of quantitative value measurements is presented in Figure 2.

For healthy participants, quantitative value analysis of brain tissue and brain regions was conducted. First, brain tissue segmentation was performed on each individual’s quantitative map using the New Segment tool in SPM [20]. Then, quantitative values for gray matter (GM) and WM were extracted using in-house programs written in MATLAB (The MathWorks Inc).

Figure 2. The process of quantitative value measurements for healthy participants. AAL: anatomical automatic labeling; GM: gray matter; JHU: Johns Hopkins University; WM: white matter.

In addition to analyzing quantitative values for both GM and WM tissue, analysis was also performed by different brain regions for clinical evaluation and application. Using the anatomical automatic labeling atlas of the Montreal Neurological Institute (MNI), 10 GM regions of interest (ROIs) were created on anatomical structures, including frontal cortex, temporal cortex, parietal cortex, occipital cortex, insula, hippocampus, caudate nucleus, putamen, pallidum, and thalamus. Additionally, using the JHU-WhiteMatter-48 atlas, 9 WM ROIs were generated, including the middle cerebellar peduncle, genu of corpus callosum, splenium of corpus callosum, cerebral peduncle, internal capsule, anterior corona radiata, superior corona radiata, posterior corona radiata, and external capsule [21]. In all analyses, right and left quantitative data were averaged. The quantitative value of the corresponding brain region in the quantitative map reconstructed by DL and from routine scans was obtained for each individual for subsequent statistical analysis.
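Extracting a regional mean from a quantitative map with an atlas label image can be sketched as below. This is a toy illustration with hypothetical label IDs; the actual analysis used the anatomical automatic labeling and JHU atlases after segmentation, and pooling the left and right labels reproduces the left-right averaging when the two sides contribute equally:

```python
import numpy as np

def regional_mean(qmap: np.ndarray, atlas: np.ndarray, label_ids) -> float:
    """Mean quantitative value over all voxels whose atlas label is in label_ids
    (eg, the left and right labels of one region pooled together)."""
    mask = np.isin(atlas, label_ids)
    return float(qmap[mask].mean())

# Toy 4x4 example: label 1 = left region, label 2 = right region, 0 = background
atlas = np.array([[1, 1, 0, 0],
                  [1, 1, 0, 0],
                  [0, 0, 2, 2],
                  [0, 0, 2, 2]])
t1 = np.where(atlas == 1, 1200.0, np.where(atlas == 2, 1240.0, 0.0))
mean_t1 = regional_mean(t1, atlas, [1, 2])  # pools left + right voxels
```
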

For the images of individuals with pathologies, an experienced radiologist used the ITK-SNAP software (version 3.8.0; [22]) to delineate the ROI of typical lesions manually and acquired the mean value of the signal in the ROI. To ensure paired comparison, identical ROI size and location were replicated on the DL reconstructed and reference routine scan images. A total of 12 ROIs were obtained from 7 (4.43%) patients (individual 3: 3 ROIs, individual 5: 3 ROIs, individual 7: 2 ROIs, and the remaining 4 individuals: 1 ROI each).

Image Quality Evaluation

Two full-reference evaluation indices—peak signal-to-noise ratio (PSNR) and structural similarity image measure (SSIM)—were used in this study for image quality assessment [23]. The larger the value of the 2 indices, the better the image quality. The PSNR was defined by equation 5:

\mathrm{MSE} = \frac{1}{M\times N} \sum_{m=1}^{M} \sum_{n=1}^{N} \left[ R(m,n) - I(m,n) \right]^2
\mathrm{PSNR} = 10 \log_{10} \frac{L^2}{\mathrm{MSE}} \quad (5)

The calculation method of SSIM is given by equation 6:

\mathrm{SSIM}(X,Y) = \frac{\left(2\mu_X \mu_Y + C_1\right)\left(2\sigma_{XY} + C_2\right)}{\left(\mu_X^2 + \mu_Y^2 + C_1\right)\left(\sigma_X^2 + \sigma_Y^2 + C_2\right)} \quad (6)
C_1 = (K_1 L)^2, \quad C_2 = (K_2 L)^2

where \mu_X and \mu_Y, \sigma_X and \sigma_Y, and \sigma_{XY} are the local means, SDs, and cross-covariance for the DL quantitative images and routine scans, respectively, and C_1 and C_2 are 2 quantities used to stabilize the division in the case of a weak denominator, with L=65,535, K_1=0.01, and K_2=0.03.
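Equations 5 and 6 can be implemented directly in numpy. The sketch below is our own; note that the SSIM as commonly published is computed over local windows and then averaged, so the global-statistics version here is only an illustration of the formula:

```python
import numpy as np

L = 65535  # dynamic range used in the text
K1, K2 = 0.01, 0.03
C1, C2 = (K1 * L) ** 2, (K2 * L) ** 2

def psnr(ref: np.ndarray, img: np.ndarray) -> float:
    # Equation 5: MSE over all M*N pixels, then PSNR against dynamic range L
    mse = np.mean((ref.astype(np.float64) - img.astype(np.float64)) ** 2)
    return 10 * np.log10(L ** 2 / mse)

def ssim_global(x: np.ndarray, y: np.ndarray) -> float:
    # Equation 6 evaluated on global image statistics (published SSIM uses local windows)
    mx, my = x.mean(), y.mean()
    vx, vy = x.var(), y.var()
    cov = np.mean((x - mx) * (y - my))
    return ((2 * mx * my + C1) * (2 * cov + C2)) / ((mx**2 + my**2 + C1) * (vx + vy + C2))
```
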

In addition, a no-reference image quality assessment method, the naturalness image quality evaluator (NIQE), was used to evaluate the quality of quantitative images reconstructed by DL, with fast scan and with routine scan, respectively [24]. Unlike PSNR and SSIM, the lower the value of NIQE, the less distortion and higher image quality.

Statistical Analysis

For the quantitative analysis of brain tissue in healthy participants, a paired t test was used to compare the differences in quantitative values between DL and routine scans, and equivalence was confirmed using two 1-sided tests (TOST). Linear regression analysis and Bland-Altman analysis were also used, combining GM and WM to comprehensively evaluate overall bias. This approach captures the overall trends and variability of different tissue types, which is crucial for evaluating the performance of DL reconstruction in clinically relevant contexts. For the region-based quantitative value analysis in healthy participants, coefficients of variation (CVs) were calculated within each imaging method (intragroup CV) and between the 2 imaging methods (intergroup CV); the intergroup CV was calculated using the average of the quantitative values from each method. For the images of individuals with pathologies, a paired t test was used to compare quantitative values in different ROIs between groups. All statistical analyses were performed using SPSS 22.0 and GraphPad Prism 9. A P value <.05 was considered statistically significant.
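The Bland-Altman percentage bias with 95% limits of agreement and the CV can be computed as below. This is a minimal numpy sketch under standard definitions; the function names are our own:

```python
import numpy as np

def bland_altman_pct(dl: np.ndarray, ref: np.ndarray):
    """Percentage differences against the pairwise mean: returns
    (mean bias, lower 95% limit of agreement, upper 95% limit of agreement)."""
    diff_pct = (dl - ref) / ((dl + ref) / 2) * 100
    bias = diff_pct.mean()
    sd = diff_pct.std(ddof=1)  # sample SD of the percentage differences
    return bias, bias - 1.96 * sd, bias + 1.96 * sd

def cv_percent(values) -> float:
    """Coefficient of variation in percent: sample SD / mean * 100."""
    values = np.asarray(values, dtype=np.float64)
    return float(values.std(ddof=1) / values.mean() * 100)
```
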

Ethical Considerations

This study was approved by the Medical Research Ethics Committees of Beijing Friendship Hospital, Capital Medical University (2020-P2-122-02). All participants provided informed consent before participating. Participants’ privacy and confidentiality were protected, with all data being deidentified during analysis. No compensation was provided for participation. No identifiable information of participants was included in the paper or supplementary materials in Multimedia Appendix 1. All procedures adhered to ethical guidelines for participant autonomy, safety, and confidentiality.


Time Consumption

The average whole brain acquisition time was 4 minutes 55 seconds for routine scans, with an additional 1 minute for vendor postprocessing. Fast scans were completed in 1 minute 52 seconds, and applying the trained SRGAN network to reconstruct the 512×512 quantitative T1/T2/PD maps took approximately 1 second. Thus, the combined acquisition and reconstruction time was reduced from 5 minutes 55 seconds to 1 minute 53 seconds, promoting the wider clinical application of quantitative neuroimaging and synthetic MRI.
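The time savings quoted above follow from simple arithmetic on the stated durations:

```python
from datetime import timedelta

# Routine pipeline: 4 min 55 s acquisition + 1 min vendor postprocessing
routine = timedelta(minutes=4, seconds=55) + timedelta(minutes=1)
# Fast pipeline: 1 min 52 s acquisition + ~1 s SRGAN reconstruction
fast = timedelta(minutes=1, seconds=52) + timedelta(seconds=1)

saving = 1 - fast / routine  # fraction of total pipeline time saved (~68%)
```
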

Quantitative Value Accuracy

A paired t test showed significant differences in GM tissue values between DL and fast scans (T1: P<.001, PD: P=.002), while there was no significant difference in GM tissue values between DL and routine scans at T1 (P=.66) and PD (P=.18). However, compared with routine scans, T2 values still showed significant differences after DL (P<.001). For WM, compared with routine scans, there was no significant difference in T2 after DL (P=.11). Although there were still significant differences in T1 and PD, the differences were reduced compared with fast scans (Table 1).

Table 1. The quantitative value of brain tissue obtained by fast scan, deep learning (DL), and routine scan.
| | Fast, mean (SD) | DL, mean (SD) | Routine, mean (SD) | P valueᵃ | P valueᵇ |
|---|---|---|---|---|---|
| GMᶜ | | | | | |
| T1 (ms) | 1200.03 (36.01) | 1230.17 (34.12) | 1232.65 (34.98) | <.001 | .66 |
| T2 (ms) | 97.87 (2.07) | 91.36 (2.60) | 93.33 (3.50) | <.001 | <.001 |
| PD (%) | 79.69 (2.35) | 80.29 (1.51) | 80.11 (1.23) | .002 | .18 |
| WMᵈ | | | | | |
| T1 (ms) | 814.88 (35.78) | 855.06 (32.12) | 837.86 (35.88) | <.001 | <.001 |
| T2 (ms) | 84.57 (2.17) | 80.23 (1.61) | 79.92 (2.05) | <.001 | .11 |
| PD (%) | 67.55 (1.94) | 68.49 (1.44) | 68.21 (1.29) | <.001 | .01 |

at test results between DL and fast scans.

bt test results between DL and routine scans.

cGM: gray matter.

dWM: white matter.

Setting the clinically acceptable brain tissue quantification limit to ±5% of the mean values from the routine scans (T1_GM ε: 61.63 ms; T2_GM ε: 4.67 ms; PD_GM ε: 4.01%; T1_WM ε: 41.89 ms; T2_WM ε: 4.00 ms; and PD_WM ε: 3.41%), the 90% TOST CIs for brain tissue in the 3 quantitative maps (T1_GM: −11.88 to 6.91; T2_GM: −2.48 to −1.46; PD_GM: −0.04 to 0.41; T1_WM: 10.94 to 23.46; T2_WM: −0.01 to 0.03; and PD_WM: 0.10 to 0.45) were all within the predefined margins (P<.001).

Figure 3 shows the relationship and Bland-Altman plots between the T1, T2, and PD values obtained by DL and routine scans (reference). The correlations with the reference values were excellent: the R² for the T1, T2, and PD values of DL compared with the reference was 0.98, 0.97, and 0.99, respectively. The slopes of the linear regression were near 1.0 for both T1 (0.9418) and PD (0.9946); in contrast, T2 agreement was moderate, with a slope of 0.8057 (Figure 3A,C,E). In addition, the average percentage bias for T1, T2, and PD was 0.93%, −0.85%, and 0.31%, respectively, and the 95% limits of agreement were −4.34% to 6.20% for T1, −4.70% to 3.00% for T2, and −1.35% to 1.98% for PD (Figure 3B,D,F). Similarly, Figure 4 shows the relationship and Bland-Altman plots of the T1, T2, and PD values acquired by DL and fast scans. Linear regression again showed a strong correlation between the 2 methods for T1 and PD (T1: R²=0.99; PD: R²=0.98), with a slightly lower correlation for T2 (R²=0.80). The slopes were near 1.0 for both T1 (0.9680) and PD (0.9381), whereas the T2 slope was moderate at 0.7755 (Figure 4A,C,E). The bias trends between these 2 methods were similar: the mean percentage difference was smaller for T1 and PD (3.66% and 1.09%, respectively) and slightly higher for T2 (−6.32%) (Figure 4B,D,F).

Figure 3. Scatterplots and Bland-Altman plot results for deep learning versus routine scans. The linear regression lines represent a strong linear relationship with a robust fit between the tissue value measurements of T1, T2, and proton density (PD) obtained from the 2 magnetic resonance imaging methods. Bland-Altman plots show mean percentage differences (solid red line) and the 95% CIs (dashed black line). SRGAN: superresolution generative adversarial network.
Figure 4. Scatterplots and Bland-Altman plot results for deep learning versus fast scans. The linear regression lines represent a strong linear relationship with a robust fit between the tissue value measurements of T1, T2, and proton density (PD) obtained from the 2 magnetic resonance imaging methods. Bland-Altman plots show mean percentage differences (solid red line) and the 95% CIs (dashed black line). SRGAN: superresolution generative adversarial network.

Table 2 presents the intragroup and intergroup CVs of brain regions’ quantitative values acquired by DL and routine scans. The intragroup variability of the quantitative values obtained by these 2 methods was small: the intragroup CV for the routine scan’s PD values ranged from 1.64% to 6.44%, and that for DL ranged from 1.98% to 5.84%. The highest intragroup CVs of the T1 and T2 values were noted in the pallidum (T1: 18.77% vs 17.25%; and T2: 17.00% vs 16.33%). The intergroup CV was lower than the intragroup CV for all brain regions. For the intergroup comparison, there were no significant differences in the T1, T2, and PD values of the WM regions after DL reconstruction. However, significant differences in T2 values were observed in GM regions, including the frontal cortex, temporal cortex, parietal cortex, occipital cortex, insula, and hippocampus.

Table 3 presents the quantitative values in ROI of typical lesions acquired by DL and routine scans. Compared with the quantitative values under routine scans, there were no significant differences in the quantitative values reconstructed by deep learning (T1, P=.67; T2, P=.73; and PD, P=.75).

Table 2. Intragroup and intergroup coefficient of variation (CV) for T1, T2, and proton density (PD) based on anatomical automatic labeling (AAL) and Johns Hopkins University (JHU) atlas in healthy individuals.
| Brain region | Intragroup CV (%), DLᵃ (T1/T2/PD) | Intragroup CV (%), routine (T1/T2/PD) | Intergroup CV (%) (T1/T2/PD) | P value (T1/T2/PD) |
|---|---|---|---|---|
| AAL atlas | | | | |
| Frontal | 7.41/7.28/3.62 | 8.20/7.14/3.44 | 2.07/5.17/0.61 | .29/.001/.94 |
| Temporal | 5.10/8.00/3.31 | 5.50/8.62/3.27 | 1.65/4.98/0.40 | .38/.003/.83 |
| Parietal | 9.32/14.40/3.02 | 9.08/15.90/2.97 | 1.30/8.11/0.45 | .38/<.001/.42 |
| Occipital | 7.43/11.07/2.15 | 7.19/13.32/2.41 | 1.72/5.10/0.47 | .60/.005/.55 |
| Insula | 9.03/15.23/2.39 | 9.86/13.44/2.53 | 1.80/7.12/0.72 | .95/.02/.36 |
| Hippocampus | 8.95/11.43/2.53 | 9.02/14.30/2.71 | 2.62/6.08/0.72 | .22/.005/.34 |
| Caudate | 13.69/16.20/5.29 | 14.65/17.14/5.37 | 2.60/5.32/0.76 | .70/.35/.92 |
| Putamen | 13.48/12.37/2.62 | 14.48/17.33/3.47 | 2.04/5.44/0.73 | .99/.11/.92 |
| Pallidum | 18.77/16.33/2.26 | 17.25/17.00/2.61 | 3.54/3.37/0.85 | .51/.56/.50 |
| Thalamus | 13.37/12.39/2.38 | 13.99/13.96/3.00 | 1.69/4.05/0.88 | .77/.27/.67 |
| JHU atlas | | | | |
| MCPᵇ | 7.28/7.04/1.72 | 5.89/8.13/1.64 | 1.67/2.79/0.51 | .50/.08/.28 |
| GCAᶜ | 15.33/9.77/5.84 | 15.22/9.11/6.44 | 1.99/2.89/0.76 | .87/.52/.55 |
| SCAᵈ | 6.44/3.70/1.98 | 6.00/4.64/1.94 | 1.74/1.71/0.63 | .19/.05/.13 |
| CPᵉ | 10.16/7.81/4.77 | 9.62/7.54/5.01 | 2.73/2.55/1.08 | .41/.14/.55 |
| ICᶠ | 9.55/4.29/3.95 | 9.63/4.52/4.20 | 1.30/1.74/0.68 | .83/.21/.45 |
| ACRᵍ | 9.01/5.62/2.94 | 9.78/6.91/3.86 | 1.97/1.34/0.91 | .92/.97/.98 |
| SCRʰ | 9.95/4.68/3.73 | 8.35/4.95/3.07 | 2.16/1.31/1.03 | .27/.14/.28 |
| PCRⁱ | 14.19/3.72/4.25 | 12.20/4.59/3.69 | 2.60/2.49/0.96 | .42/.03/.23 |
| ECʲ | 11.32/5.02/2.82 | 11.23/4.89/3.36 | 1.38/0.92/0.95 | .65/.69/.21 |

aDL: deep learning.

bMCP: middle cerebellar peduncle.

cGCA: genu of corpus callosum.

dSCA: splenium of corpus callosum.

eCP: cerebral peduncle.

fIC: internal capsule.

gACR: anterior corona radiata.

hSCR: superior corona radiata.

iPCR: posterior corona radiata.

jEC: external capsule.

Table 3. Quantitative value in the region of interest (ROI) of typical lesions acquired by deep learning (DL) and routine scan.
| ROI | Description | T1ᵃ (ms), DL | T1 (ms), routine | T2ᵇ (ms), DL | T2 (ms), routine | PDᶜ,ᵈ (%), DL | PD (%), routine |
|---|---|---|---|---|---|---|---|
| 1 | WMHᵉ | 1019.68 | 1052.20 | 100.84 | 112.9 | 80.70 | 79.16 |
| 2 | WMH | 1034.41 | 1052.78 | 96.32 | 96.71 | 75.21 | 77.09 |
| 3 | WMH | 955.68 | 1013.20 | 94.78 | 104.52 | 70.77 | 72.97 |
| 4 | CIᶠ | 1217.30 | 1398.16 | 126.53 | 149.71 | 80.00 | 81.26 |
| 5 | Encephalomalacia | 3597.26 | 3683.45 | 310.67 | 276.99 | 102.83 | 106.59 |
| 6 | WMH | 1088.41 | 1214.51 | 109.50 | 121.23 | 82.11 | 84.70 |
| 7 | CI | 1832.00 | 1679.00 | 136.10 | 133.80 | 90.50 | 90.90 |
| 8 | Encephalomalacia | 3719.04 | 3892.54 | 292.16 | 272.32 | 101.31 | 104.98 |
| 9 | WMH | 1734.53 | 1517.03 | 163.45 | 182.01 | 85.14 | 86.37 |
| 10 | WMH | 1171.00 | 1238.00 | 110.60 | 124.70 | 80.70 | 81.10 |
| 11 | Encephalomalacia | 3666.00 | 3469.00 | 350.20 | 295.80 | 107.80 | 106.30 |
| 12 | WMH | 1253.00 | 1285.00 | 113.20 | 105.20 | 78.70 | 83.00 |

aP=.67.

bP=.73.

cPD: proton density.

dP=.75.

eWMH: white matter hyperintensities.

fCI: cerebral infarcts.

Image Quality

Figure 5 and Tables 4 and 5 compare image quality among the 3 protocols. After SRGAN reconstruction, the DL maps preserved quantitative accuracy while outperforming fast acquisitions on every objective metric, demonstrating high image quality.

Figure 5. Representative results of the proposed method. Results of fast scans (left), deep learning (DL) reconstruction (middle), and routine scans (right) are displayed for T1 map, T2 map, and proton density (PD) map.
Table 4. Comparison of image quality between the deep learning quantitative maps and routine scans using the structural similarity image measure (SSIM) and peak signal-to-noise ratio (PSNR).
| | PSNR, mean (SD) | SSIM, mean (SD) |
|---|---|---|
| T1 maps | 21.94 (2.36) | 0.82 (0.08) |
| T2 maps | 29.14 (1.81) | 0.93 (0.03) |
| PDᵃ maps | 58.01 (3.15) | 0.99 (0.01) |

aPD: proton density.

Table 5. The naturalness image quality evaluator of quantitative maps under fast scan, deep learning (DL), and routine scan.
| | Fast scan, mean (SD) | DL, mean (SD) | Routine, mean (SD) |
|---|---|---|---|
| T1 maps | 29.46 (6.42) | 13.36 (1.88) | 10.90 (1.84) |
| T2 maps | 30.56 (5.57) | 18.79 (1.46) | 15.21 (1.78) |
| PDᵃ maps | 34.66 (2.99) | 11.74 (2.17) | 10.85 (2.53) |

aPD: proton density.


Principal Findings

In this paper, we trained an SRGAN to reconstruct whole brain T1, T2, and PD maps from accelerated synthetic MRI data. Experimental results show that DL-based reconstruction can generate quantitative maps from fast scans with a 50% reduction in acquisition time. The quantitative instability caused by LR acquisition is reduced, with no significant difference between the reconstructed and routine scan values, and the intra- and intergroup CVs are small in most brain regions. Our study also found that the resulting maps still allow clear visual detection of common pathologies, thereby helping to shorten clinical scanning time.

Comparison With Prior Work

In previous studies, the use of commercial algorithms for fast image reconstruction resulted in a total brain acquisition time exceeding 3 minutes [25]. Our research further reduces the acquisition time to less than 2 minutes while maintaining accuracy. DICOM data were used as the input and output of the network, which conforms to standard DICOM image processing practice, in which metadata (including the rescale slope) are crucial for accurate image analysis and quantitative measurement. By adhering to these specifications, the study maintains the integrity of the qMRI data, facilitating a reliable evaluation of the impact of DL reconstruction techniques on the accuracy of quantitative brain tissue values.

The MDME sequence was used to acquire the quantitative maps, and several studies have investigated the repeatability and reproducibility of quantitative values using this sequence. In this study, we applied multiple quantitative metrics and statistical methods to analyze differences among DL reconstructed, fast scan, and routine scan values. The results of our study, as indicated by the paired t tests, provide valuable insights into the performance of DL reconstruction and demonstrate that it improves the accuracy of fast quantitative MRI scans, bringing tissue-specific T1, T2, and PD values closer to those of routine clinical acquisitions. In GM, the method achieved no statistically significant difference compared to routine scans. However, a small but statistically significant underestimation in T2 remained. A previous study examining the influence of parameter modifications on MDME-derived quantitative values reported that alterations in acquisition matrix or acceleration factor induced negligible variance in measured constants, with the exception of CSF T2 estimation [26,27]. Notably, this investigation was performed on an isolated, homogeneous phantom; consequently, partial volume effects and other anatomical confounders were absent. The present fast scan protocol, in contrast, simultaneously varies both matrix size and acceleration factor without a commensurate increase in the number of echo times. This divergence in experimental design may compromise the accuracy of T2 mapping—particularly through reduced sampling density in echo space—thereby introducing systematic bias into the fitted relaxation constants. However, TOST analysis further indicates that the 90% CIs of all brain tissue parameters in the 3 quantitative maps are within clinically acceptable ranges, providing the possibility for the clinical applications of DL quantitative reconstruction.

In our quantitative analysis, the linear regression slope for T2 values (DL vs routine) was 0.8057, which deviates from the ideal value of 1.0. Clinically, this may reduce diagnostic accuracy for lesions with high T2 values, potentially leading to misdiagnosis or delayed intervention. Because most participants in this study were healthy volunteers, the accuracy of DL-reconstructed T2 values under specific pathological conditions was not validated. Such bias may arise primarily from the L1 loss component, which enforces pixel-wise fidelity at the expense of high-frequency details and can lead to oversmoothing, particularly in regions with high tissue contrast or rapid T2 decay. Consequently, although the average T2 error across healthy tissue satisfies the TOST criteria, accuracy may degrade at the extremes of the dynamic range that often occur in pathology. Despite this limitation, the SRGAN model demonstrated good performance on overall image quality metrics, indicating its practicality in routine clinical workflows, although the T2 bias warrants attention.
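The effect of oversmoothing on the regression slope can be illustrated with a toy simulation (hypothetical numbers, not the study's data): pulling each estimate toward the tissue mean compresses the dynamic range and yields a slope below 1, mirroring the 0.8057 observed for T2.

```python
import numpy as np

rng = np.random.default_rng(42)
t2_ref = rng.uniform(60.0, 200.0, 500)  # simulated reference T2 values (ms)

# Hypothetical oversmoothing: each estimate is pulled 20% toward the mean,
# mimicking the regression-to-the-mean behavior of an L1-dominated loss
t2_dl = t2_ref + 0.2 * (t2_ref.mean() - t2_ref) + rng.normal(0.0, 2.0, 500)

slope, intercept = np.polyfit(t2_ref, t2_dl, 1)
# slope lands near 0.8: the highest T2 values are systematically underestimated
```

The same mechanism explains why average errors can pass equivalence testing while extreme (pathological) T2 values are still biased downward.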

Linear regression and Bland-Altman plots also demonstrated that the vast majority of quantitative parameters exhibited high correlation and agreement between the DL and routine scans, with 95% of data points lying within the limits of agreement, in line with previous research [28,29]. In contrast, T2 estimates derived from the DL model exhibited a marked bias relative to the reference (Figure 3 vs Figure 4). For the remaining parameters, these findings indicate that the DL model successfully compensates for systematic errors introduced by the fast scan, yielding quantitative values that closely approximate those obtained under routine scanning. Such error mitigation directly supports the primary objective of this study: to establish the feasibility of using DL for accurate, ultrafast quantitative MR reconstruction.
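For reference, Bland-Altman limits of agreement can be computed as below; this is a generic sketch with a hypothetical function name and simulated data, not the study's code. Roughly 95% of paired differences are expected to fall within bias ± 1.96 SD:

```python
import numpy as np

def bland_altman(a, b):
    """Mean bias and 95% limits of agreement between paired measurements."""
    diff = np.asarray(a, float) - np.asarray(b, float)
    bias = diff.mean()
    sd = diff.std(ddof=1)
    lower, upper = bias - 1.96 * sd, bias + 1.96 * sd
    frac_within = np.mean((diff >= lower) & (diff <= upper))
    return bias, (lower, upper), frac_within

# Simulated paired measurements with a small bias and moderate noise
rng = np.random.default_rng(1)
ref = rng.normal(100.0, 10.0, 1000)
est = ref + rng.normal(0.5, 2.0, 1000)
bias, loa, frac = bland_altman(est, ref)  # frac is close to 0.95
```

A narrow interval centered near zero, as seen for T1 and PD, indicates agreement; a shifted center, as seen for T2, indicates systematic bias.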

In addition to measuring the quantitative values of GM and WM, the study also applied 2 classic atlases from neuroimaging analysis: the anatomical automatic labeling (AAL) and Johns Hopkins University atlases [30]. These atlases divide the brain into regions based on structure or function, allowing a more comprehensive characterization of the accuracy and reliability of the quantitative values. Ideally, the intragroup CV should remain below 10% and the intergroup CV below 15%; yet elevated intragroup CVs were observed in deep GM nucleus regions, such as the pallidum. This may be due to B1 inhomogeneity, differences in data acquisition parameters, or variations in the age and gender of the participants, as shown and analyzed in previous studies [31-33]. These numerical disparities may also stem from spatial registration errors between individual quantitative maps and brain region templates, which can affect the extraction of quantitative values. Our study additionally observed significant differences in T2 CVs between the DL and routine scans in some GM brain regions. This indicates that although the SRGAN model shows good consistency at the whole-tissue level, it may introduce variability at the regional scale, possibly owing to the model's sensitivity to local texture or noise, potentially limiting its utility in precise quantitative neuroimaging applications. However, the intragroup and intergroup CVs for the remaining brain regions stayed below 5%, especially for PD values, which showed minimal intragroup and intergroup variation across all brain regions. This implies that T2 is the most sensitive parameter in quantitative analysis and should be a key focus in disease assessment.
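The CV thresholds mentioned above can be computed per region as in the minimal sketch below; the function name and the example regional values are hypothetical:

```python
import numpy as np

def cv_percent(values):
    """Coefficient of variation: sample SD expressed as % of the mean."""
    v = np.asarray(values, dtype=float)
    return 100.0 * v.std(ddof=1) / v.mean()

# Hypothetical pallidum T2 values (ms) across subjects within one protocol;
# an intragroup CV above ~10% would flag the region for closer inspection
pallidum_t2 = [62.0, 70.0, 58.0, 75.0, 66.0]
intragroup_cv = cv_percent(pallidum_t2)
```

The same function applied across protocols (routine vs DL-reconstructed means) gives the intergroup CV compared against the 15% threshold.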

The outlined lesion areas in this study are predominantly distributed in the WM. The average T1, T2, and PD values within these lesion areas exceed 1000 ms, 100 ms, and 80 pu, respectively, which are higher than the ranges observed in the healthy participants. Furthermore, different types of lesions exhibit distinct quantitative values. WM hyperintensities and acute-phase cerebral infarction have similar quantitative values. In contrast, the quantitative values in encephalomalacia are significantly higher than those in the other 2 lesion types; in particular, the T1 values within encephalomalacia (approaching 4000 ms) resemble the T1 of cerebrospinal fluid because the contents of encephalomalacia are similar to cerebrospinal fluid. These findings align with clinical knowledge that lesion areas generally have longer T1 and T2 relaxation times. Importantly, the quantitative values obtained through DL still maintain diagnostic efficacy. Careful examination revealed that, although the difference in quantitative values between DL and routine scans did not reach statistical significance in this small lesion sample, 5 reconstructed T2 values were lower than those of the routine scan in 7 WM hyperintensity and infarct ROIs. Given that these lesions already exhibit long T2 relaxation times (usually >100 ms), any systematic underestimation may attenuate their abnormal appearance. This may be because the superresolution network was trained only on quantitative images of healthy adults and did not include any pathological cases. The model has therefore not learned the characteristic relaxation behavior of common brain lesions, such as the marked T2 prolongation of WM hyperintensities. This lack of exposure may lead the network to interpret high T2 values in lesions as outliers or reconstruction artifacts and regress them toward the healthy tissue distribution. The sample size will therefore be expanded in future work to verify the quantitative accuracy of DL reconstruction in various neurological diseases.

While the DL maps essentially met the predefined quantitative equivalence margins, the no-reference NIQE metric reveals a residual perceptual gap: relative to the fast scans, DL image quality improved significantly (as indicated by decreased NIQE scores), but relative to routine scans, the NIQE scores of the 3 quantitative maps were slightly higher. This suggests that the current model ensures clinically acceptable quantitative accuracy and substantially improves perceptual quality over the fast scans, but further work is needed to achieve full perceptual equivalence with routine HR imaging.

Limitations

This study has some limitations. First, the study adopted the widely validated SRGAN framework and focused on quantitative agreement and image quality comparison; emerging architectures such as ESRGAN or SwinIR were not compared, nor were traditional non-DL upsampling methods used as baselines. Second, the network was trained exclusively on data acquired with a GE 3.0 T scanner and entirely on normal-appearing brain tissue (only 7 patients with 3 common lesion types were used for testing), so the performance of the method on pathological tissue is a preliminary proof of concept rather than evidence of clinical equivalence. The small number of pathology cases limits statistical power and generalizability to diverse or high-contrast lesions, which may pose a barrier to routine clinical deployment. Third, given the current data volume, a single subject-level 8:2 train-test split prevents data leakage from related slices but may not fully capture the variability that multiple random partitions would reveal. Because of the limited training data, the model operates on 2D slices instead of 3D volumes, which may result in a loss of spatial continuity between slices. Fourth, acceleration was limited to an acceleration factor (R) of 3, reconstruction was performed on postprocessed quantitative maps rather than raw k-space data, and the weighted images generated from the quantitative maps were not evaluated. Additionally, the ground truth images were interpolated from an acquisition matrix of 320 by 256 to 512 by 512; thus, the model learns to reproduce the interpolated image content rather than recovering genuine high-frequency details that would be present in a natively HR acquisition. Finally, evaluation relied on quantitative metrics without a qualitative visual assessment by radiologists.

Future Directions

Future research should focus on the following improvements: systematically comparing the current SRGAN implementation with newer superresolution networks, quantifying the individual contributions of the perceptual and adversarial losses to quantitative accuracy, and benchmarking against non-DL baseline reconstruction methods to better illustrate the benefits of DL. A prospective cohort including demyelinating disease, tumor edema, and microhemorrhage will be enrolled, and the model will be fine-tuned or retrained as necessary before wider clinical deployment. To improve robustness, we will collect both k-space and image-domain data and develop a dual-domain joint reconstruction framework. Future work will also explore 3D or slice-aware architectures given richer training data and computing resources and will apply different data partitioning schemes to further validate generalization. Finally, weighted images derived from the superresolved quantitative maps will be generated and assessed for diagnostic quality.

Conclusions

In conclusion, accelerating synthetic MRI is pivotal for its broader clinical adoption in neuroimaging. In this study, we optimized and enhanced images acquired through fast synthetic MRI using DL-based reconstruction. Our approach reduces clinical scan time by approximately 50% while improving image quality over the fast scan. Importantly, the reconstructed T1 and PD maps show excellent agreement with the reference scans, supporting their potential for reliable quantification, although T2 values exhibit a consistent underestimation at higher intensities. Despite this limitation, the overall strong correlations and low average biases suggest that using DL methods to mitigate biases in quantitative values is feasible. Furthermore, the shortened scan time not only enhances patient comfort but also reduces the likelihood of motion artifacts. This optimization streamlines the scanning workflow, minimizes redundancy, and maximizes the efficient use of MRI scanners.

Acknowledgments

The authors are grateful to Dr Yi Zhu and Dr Rong Wei for their assistance with debugging the deep learning models. The authors also thank Dr Qi Sun for his help with the statistical methods.

Funding

This study was supported by the Space Medical Experiment Project of China Manned Space Program (HYZHXMH01005), Beijing Scholar 2015 (Zhenchang Wang), Beijing Friendship Hospital, Capital Medical University (YYZZ202351), and State Key Laboratory of Digestive Health (25ZX-QN05).

Data Availability

The datasets generated or analyzed during this study are available from the corresponding author on reasonable request.

Authors' Contributions

Conceptualization: YL (lead), HY (equal), PR (equal)

Data curation: YL

Formal analysis: YL (lead), ZZ (supporting)

Funding acquisition: ZW (lead), PR (equal)

Investigation: YL (lead), TZ (supporting)

Methodology: YL, ZZ (supporting), WL (supporting), TZ (supporting), LC (supporting), HL (supporting)

Project administration: PR

Resources: YL (lead), HL (supporting)

Supervision: HY (lead), ZW (equal), ZY (supporting)

Validation: YL

Visualization: YL

Writing – original draft: YL (lead), HY (supporting), HN (supporting)

Writing – review & editing: YL (lead), ZY (supporting), ZW (supporting), PR (supporting)

Conflicts of Interest

None declared.

Multimedia Appendix 1

The details of network structure and loss function.

DOCX File, 15 KB

  1. Cashmore MT, McCann AJ, Wastling SJ, McGrath C, Thornton J, Hall MG. Clinical quantitative MRI and the need for metrology. Br J Radiol. Apr 1, 2021;94(1120):20201215. [CrossRef] [Medline]
  2. Warntjes JBM, Dahlqvist O, Lundberg P. Novel method for rapid, simultaneous T1, T2*, and proton density quantification. Magn Reson Med. Mar 2007;57(3):528-537. [CrossRef] [Medline]
  3. Blystad I, Warntjes JBM, Smedby O, Landtblom AM, Lundberg P, Larsson EM. Synthetic MRI of the brain in a clinical setting. Acta Radiol. Dec 1, 2012;53(10):1158-1163. [CrossRef] [Medline]
  4. Gao W, Yang Q, Li X, et al. Synthetic MRI with quantitative mappings for identifying receptor status, proliferation rate, and molecular subtypes of breast cancer. Eur J Radiol. Mar 2022;148(157):110168. [CrossRef] [Medline]
  5. Zhang Z, Li S, Wang W, et al. Synthetic MRI for the quantitative and morphologic assessment of head and neck tumors: a preliminary study. Dentomaxillofac Radiol. Sep 2023;52(6):20230103. [CrossRef] [Medline]
  6. Ma L, Lian S, Liu H, et al. Diagnostic performance of synthetic magnetic resonance imaging in the prognostic evaluation of rectal cancer. Quant Imaging Med Surg. Jul 2022;12(7):3580-3591. [CrossRef] [Medline]
  7. Li M, Fu W, Ouyang L, et al. Potential clinical feasibility of synthetic MRI in bladder tumors: a comparative study with conventional MRI. Quant Imaging Med Surg. Aug 1, 2023;13(8):5109-5118. [CrossRef] [Medline]
  8. Bauer S, Markl M, Honal M, Jung BA. The effect of reconstruction and acquisition parameters for GRAPPA-based parallel imaging on the image quality. Magn Reson Med. Aug 2011;66(2):402-409. [CrossRef] [Medline]
  9. Goodfellow I, Pouget-Abadie J, Mirza M, et al. Generative adversarial networks. Commun ACM. Oct 22, 2020;63(11):139-144. [CrossRef]
  10. Mardani M, Gong E, Cheng JY, et al. Deep generative adversarial neural networks for compressive sensing MRI. IEEE Trans Med Imaging. Jan 2019;38(1):167-179. [CrossRef] [Medline]
  11. Quan TM, Nguyen-Duc T, Jeong WK. Compressed sensing MRI reconstruction using a generative adversarial network with a cyclic loss. IEEE Trans Med Imaging. Jun 2018;37(6):1488-1497. [CrossRef] [Medline]
  12. Yang G, Yu S, Dong H, et al. DAGAN: deep de-aliasing generative adversarial networks for fast compressed sensing MRI reconstruction. IEEE Trans Med Imaging. Jun 2018;37(6):1310-1321. [CrossRef] [Medline]
  13. Liu Y, Liu Y, Vanguri R, et al. 3D isotropic super-resolution prostate MRI using generative adversarial networks and unpaired multiplane slices. J Digit Imaging. Oct 2021;34(5):1199-1208. [CrossRef] [Medline]
  14. Su PY, Shih HJ, Xu JL. Rapid liver fibrosis evaluation using the UNet-ResNet50-32 × 4d model in magnetic resonance elastography: retrospective study. JMIR Med Inform. Oct 20, 2025;13:e80351. [CrossRef] [Medline]
  15. Ran M, Hu J, Chen Y, et al. Denoising of 3D magnetic resonance images using a residual encoder-decoder Wasserstein generative adversarial network. Med Image Anal. Jul 2019;55:165-180. [CrossRef] [Medline]
  16. Latif S, Asim M, Usman M, Qadir J, Rana R. Automating motion correction in multishot MRI using generative adversarial networks. arXiv. Preprint posted online on Nov 24, 2018. [CrossRef]
  17. Qiu S, Chen Y, Ma S, et al. Multiparametric mapping in the brain from conventional contrast-weighted images using deep learning. Magn Reson Med. Jan 2022;87(1):488-495. [CrossRef] [Medline]
  18. Liu Y, Niu H, Ren P, et al. Generation of quantification maps and weighted images from synthetic magnetic resonance imaging using deep learning network. Phys Med Biol. Jan 17, 2022;67(2). [CrossRef] [Medline]
  19. Dong H, Supratak A, Mai L, et al. TensorLayer: a versatile library for efficient deep learning development. Presented at: MM ’17: Proceedings of the 25th ACM International Conference on Multimedia; Oct 23-27, 2017:1201-1204; Mountain View, CA. [CrossRef]
  20. SPM8. UCL: Department of Imaging Neuroscience. URL: https://www.fil.ion.ucl.ac.uk/spm/software/spm8/ [Accessed 2026-01-14]
  21. Zheng Z, Liu Y, Yin H, et al. Evaluating T1, T2 relaxation, and proton density in normal brain using synthetic MRI with fast imaging protocol. Magn Reson Med Sci. Oct 1, 2024;23(4):514-524. [CrossRef] [Medline]
  22. itk-SNAP. URL: http://www.itksnap.org/pmwiki/pmwiki.php [Accessed 2026-01-14]
  23. Wang Z, Bovik AC, Sheikh HR, Simoncelli EP. Image quality assessment: from error visibility to structural similarity. IEEE Trans Image Process. Apr 2004;13(4):600-612. [CrossRef] [Medline]
  24. Saad MA, Bovik AC, Charrier C. Blind image quality assessment: a natural scene statistics approach in the DCT domain. IEEE Trans Image Process. Aug 2012;21(8):3339-3352. [CrossRef]
  25. Kim E, Cho HH, Cho SH, et al. Accelerated synthetic MRI with deep learning-based reconstruction for pediatric neuroimaging. AJNR Am J Neuroradiol. Nov 2022;43(11):1653-1659. [CrossRef] [Medline]
  26. Zheng Z, Yang J, Zhang D, et al. The effect of scan parameters on T1, T2 relaxation times measured with multi-dynamic multi-echo sequence: a phantom study. Phys Eng Sci Med. Jun 2022;45(2):657-664. [CrossRef] [Medline]
  27. Hirano Y, Ishizaka K, Sugimori H, et al. Assessment of accuracy and repeatability of quantitative parameter mapping in MRI. Radiol Phys Technol. Dec 2024;17(4):918-928. [CrossRef] [Medline]
  28. McAllister A, Leach J, West H, Jones B, Zhang B, Serai S. Quantitative synthetic MRI in children: normative intracranial tissue segmentation values during development. AJNR Am J Neuroradiol. Dec 2017;38(12):2364-2372. [CrossRef] [Medline]
  29. Saccenti L, Andica C, Hagiwara A, et al. Brain tissue and myelin volumetric analysis in multiple sclerosis at 3T MRI with various in-plane resolutions using synthetic MRI. Neuroradiology. Nov 2019;61(11):1219-1227. [CrossRef] [Medline]
  30. Rolls ET, Huang CC, Lin CP, Feng J, Joliot M. Automated anatomical labelling atlas 3. Neuroimage. Feb 1, 2020;206:116189. [CrossRef] [Medline]
  31. Hagiwara A, Fujimoto K, Kamagata K, et al. Age-related changes in relaxation times, proton density, myelin, and tissue volumes in adult brain analyzed by 2-dimensional quantitative synthetic magnetic resonance imaging. Invest Radiol. Mar 1, 2021;56(3):163-172. [CrossRef] [Medline]
  32. Hagiwara A, Hori M, Cohen-Adad J, et al. Linearity, bias, intrascanner repeatability, and interscanner reproducibility of quantitative multidynamic multiecho sequence for rapid simultaneous relaxometry at 3 T: a validation study with a standardized phantom and healthy controls. Invest Radiol. Jan 2019;54(1):39-47. [CrossRef] [Medline]
  33. Zheng Z, Liu Y, Wang Z, Yin H, Zhang D, Yang J. Evaluating age-and gender-related changes in brain volumes in normal adult using synthetic magnetic resonance imaging. Brain Behav. Jul 2024;14(7):e3619. [CrossRef] [Medline]


AAL: anatomical automatic labeling
CV: coefficient of variation
DL: deep learning
GM: gray matter
HR: high resolution
LR: low resolution
MRI: magnetic resonance imaging
NIQE: naturalness image quality evaluator
PD: proton density
PSNR: peak signal-to-noise ratio
ReLU: rectified linear unit
ROI: region of interest
SRGAN: superresolution generative adversarial network
SSIM: structural similarity index measure
TOST: two 1-sided tests
WM: white matter


Edited by Andrew Coristine; submitted 20.Jun.2025; peer-reviewed by Bernd Feige, Jianfeng Bao; accepted 23.Dec.2025; published 23.Jan.2026.

Copyright

© Yawen Liu, Hongxia Yin, Zuofeng Zheng, Wenjuan Liu, Tingting Zhang, Linkun Cai, Haijun Niu, Han Lv, Zhenghan Yang, Zhenchang Wang, Pengling Ren. Originally published in JMIR Medical Informatics (https://medinform.jmir.org), 23.Jan.2026.

This is an open-access article distributed under the terms of the Creative Commons Attribution License (https://creativecommons.org/licenses/by/4.0/), which permits unrestricted use, distribution, and reproduction in any medium, provided the original work, first published in JMIR Medical Informatics, is properly cited. The complete bibliographic information, a link to the original publication on https://medinform.jmir.org/, as well as this copyright and license information must be included.