<?xml version="1.0" encoding="UTF-8"?>
<!DOCTYPE article PUBLIC "-//NLM//DTD Journal Publishing DTD v2.0 20040830//EN" "http://dtd.nlm.nih.gov/publishing/2.0/journalpublishing.dtd">
<?covid-19-tdm?>
<article xmlns:xlink="http://www.w3.org/1999/xlink" article-type="review-article" dtd-version="2.0">
  <front>
    <journal-meta>
      <journal-id journal-id-type="publisher-id">JMI</journal-id>
      <journal-id journal-id-type="nlm-ta">JMIR Med Inform</journal-id>
      <journal-title>JMIR Medical Informatics</journal-title>
      <issn pub-type="epub">2291-9694</issn>
      <publisher>
        <publisher-name>JMIR Publications</publisher-name>
        <publisher-loc>Toronto, Canada</publisher-loc>
      </publisher>
    </journal-meta>
    <article-meta>
      <article-id pub-id-type="publisher-id">v10i6e37365</article-id>
      <article-id pub-id-type="pmid">35709336</article-id>
      <article-id pub-id-type="doi">10.2196/37365</article-id>
      <article-categories>
        <subj-group subj-group-type="heading">
          <subject>Review</subject>
        </subj-group>
        <subj-group subj-group-type="article-type">
          <subject>Review</subject>
        </subj-group>
      </article-categories>
      <title-group>
        <article-title>Combating COVID-19 Using Generative Adversarial Networks and Artificial Intelligence for Medical Images: Scoping Review</article-title>
      </title-group>
      <contrib-group>
        <contrib contrib-type="editor">
          <name>
            <surname>Lovis</surname>
            <given-names>Christian</given-names>
          </name>
        </contrib>
      </contrib-group>
      <contrib-group>
        <contrib contrib-type="reviewer">
          <name>
            <surname>Khan</surname>
            <given-names>Shahid</given-names>
          </name>
        </contrib>
        <contrib contrib-type="reviewer">
          <name>
            <surname>Gabashvili</surname>
            <given-names>Irene</given-names>
          </name>
        </contrib>
      </contrib-group>
      <contrib-group>
        <contrib id="contrib1" contrib-type="author">
          <name name-style="western">
            <surname>Ali</surname>
            <given-names>Hazrat</given-names>
          </name>
          <degrees>PhD</degrees>
          <xref rid="aff1" ref-type="aff">1</xref>
          <ext-link ext-link-type="orcid">https://orcid.org/0000-0003-3058-5794</ext-link>
        </contrib>
        <contrib id="contrib2" contrib-type="author" corresp="yes">
          <name name-style="western">
            <surname>Shah</surname>
            <given-names>Zubair</given-names>
          </name>
          <degrees>PhD</degrees>
          <xref rid="aff1" ref-type="aff">1</xref>
          <address>
            <institution>College of Science and Engineering</institution>
            <institution>Hamad Bin Khalifa University</institution>
            <addr-line>Al Luqta St</addr-line>
            <addr-line>Ar-Rayyan</addr-line>
            <addr-line>Doha, 34110</addr-line>
            <country>Qatar</country>
            <phone>974 50744851</phone>
            <email>zshah@hbku.edu.qa</email>
          </address>
          <ext-link ext-link-type="orcid">https://orcid.org/0000-0001-7389-3274</ext-link>
        </contrib>
      </contrib-group>
      <aff id="aff1">
        <label>1</label>
        <institution>College of Science and Engineering</institution>
        <institution>Hamad Bin Khalifa University</institution>
        <addr-line>Doha</addr-line>
        <country>Qatar</country>
      </aff>
      <author-notes>
        <corresp>Corresponding Author: Zubair Shah <email>zshah@hbku.edu.qa</email></corresp>
      </author-notes>
      <pub-date pub-type="collection">
        <month>6</month>
        <year>2022</year>
      </pub-date>
      <pub-date pub-type="epub">
        <day>29</day>
        <month>6</month>
        <year>2022</year>
      </pub-date>
      <volume>10</volume>
      <issue>6</issue>
      <elocation-id>e37365</elocation-id>
      <history>
        <date date-type="received">
          <day>17</day>
          <month>2</month>
          <year>2022</year>
        </date>
        <date date-type="rev-request">
          <day>2</day>
          <month>3</month>
          <year>2022</year>
        </date>
        <date date-type="rev-recd">
          <day>6</day>
          <month>3</month>
          <year>2022</year>
        </date>
        <date date-type="accepted">
          <day>11</day>
          <month>3</month>
          <year>2022</year>
        </date>
      </history>
      <copyright-statement>©Hazrat Ali, Zubair Shah. Originally published in JMIR Medical Informatics (https://medinform.jmir.org), 29.06.2022.</copyright-statement>
      <copyright-year>2022</copyright-year>
      <license license-type="open-access" xlink:href="https://creativecommons.org/licenses/by/4.0/">
        <p>This is an open-access article distributed under the terms of the Creative Commons Attribution License (https://creativecommons.org/licenses/by/4.0/), which permits unrestricted use, distribution, and reproduction in any medium, provided the original work, first published in JMIR Medical Informatics, is properly cited. The complete bibliographic information, a link to the original publication on https://medinform.jmir.org/, as well as this copyright and license information must be included.</p>
      </license>
      <self-uri xlink:href="https://medinform.jmir.org/2022/6/e37365" xlink:type="simple"/>
      <abstract>
        <sec sec-type="background">
          <title>Background</title>
          <p>Research on the diagnosis of COVID-19 using lung images is limited by the scarcity of imaging data. Generative adversarial networks (GANs) are popular for synthesis and data augmentation. GANs have been explored for data augmentation to enhance the performance of artificial intelligence (AI) methods for the diagnosis of COVID-19 within lung computed tomography (CT) and X-ray images. However, the role of GANs in overcoming data scarcity for COVID-19 is not well understood.</p>
        </sec>
        <sec sec-type="objective">
          <title>Objective</title>
          <p>This review presents a comprehensive study on the role of GANs in addressing the challenges related to COVID-19 data scarcity and diagnosis. It is the first review that summarizes different GAN methods and lung imaging data sets for COVID-19. It attempts to answer the questions related to applications of GANs, popular GAN architectures, frequently used image modalities, and the availability of source code.</p>
        </sec>
        <sec sec-type="methods">
          <title>Methods</title>
          <p>A search was conducted on 5 databases, namely PubMed, IEEE Xplore, Association for Computing Machinery (ACM) Digital Library, Scopus, and Google Scholar. The search was conducted from October 11-13, 2021, using intervention keywords, such as “generative adversarial networks” and “GANs,” and application keywords, such as “COVID-19” and “coronavirus.” The review was performed following the Preferred Reporting Items for Systematic Reviews and Meta-Analyses Extension for Scoping Reviews (PRISMA-ScR) guidelines. Only studies that reported GAN-based methods for analyzing chest X-ray, chest CT, or chest ultrasound images were included; studies that used deep learning methods but did not use GANs were excluded. No restrictions were imposed on the country of publication, study design, or outcomes. Only studies written in English and published from 2020 to 2022 were included.</p>
        </sec>
        <sec sec-type="results">
          <title>Results</title>
          <p>This review included 57 full-text studies that reported the use of GANs for different applications in COVID-19 lung imaging data. Most of the studies (n=42, 74%) used GANs for data augmentation to enhance the performance of AI techniques for COVID-19 diagnosis. Other popular applications of GANs were segmentation of lungs and superresolution of lung images. The cycleGAN and the conditional GAN were the most commonly used architectures, used in 9 studies each. In addition, 29 (51%) studies used chest X-ray images, while 21 (37%) studies used CT images for the training of GANs. For the majority of the studies (n=47, 82%), the experiments were conducted and results were reported using publicly available data. A secondary evaluation of the results by radiologists/clinicians was reported by only 2 (4%) studies.</p>
        </sec>
        <sec sec-type="conclusions">
          <title>Conclusions</title>
          <p>Studies have shown that GANs have great potential to address the data scarcity challenge for lung images in COVID-19. Data synthesized with GANs have helped improve the training of convolutional neural network (CNN) models for the diagnosis of COVID-19. In addition, GANs have also contributed to enhancing CNN performance through superresolution and segmentation of the images. This review also identified key limitations that may hinder the translation of GAN-based methods into clinical applications.</p>
        </sec>
      </abstract>
      <kwd-group>
        <kwd>augmentation</kwd>
        <kwd>artificial intelligence</kwd>
        <kwd>COVID-19</kwd>
        <kwd>diagnosis</kwd>
        <kwd>generative adversarial networks</kwd>
        <kwd>diagnostic</kwd>
        <kwd>lung image</kwd>
        <kwd>imaging</kwd>
        <kwd>data augmentation</kwd>
        <kwd>X-ray</kwd>
        <kwd>CT scan</kwd>
        <kwd>data scarcity</kwd>
        <kwd>image data</kwd>
        <kwd>neural network</kwd>
        <kwd>clinical informatics</kwd>
      </kwd-group>
    </article-meta>
  </front>
  <body>
    <sec sec-type="introduction">
      <title>Introduction</title>
      <sec>
        <title>Background</title>
        <p>In December 2019, COVID-19 broke out and spread at an unprecedented rate, given the highly contagious nature of the virus. As a result, the World Health Organization (WHO) declared it a global pandemic in March 2020 [<xref ref-type="bibr" rid="ref1">1</xref>]. A response to combat the spread through speedy diagnosis therefore became the most critical need of the time. A common method for diagnosing COVID-19 is the real-time reverse transcription–polymerase chain reaction (RT-PCR) test. However, with the increasing number of cases worldwide, the health care sector was overloaded, as it became challenging to meet the demand for tests with the available testing facilities. In addition, research has shown that RT-PCR may produce false negatives or fluctuating results [<xref ref-type="bibr" rid="ref2">2</xref>]. Hence, diagnosis through computed tomography (CT) and X-ray images of the lungs may supplement RT-PCR testing. Motivated by this need, alternative methods, such as automatic diagnosis of COVID-19 from lung images, were explored and encouraged. In this regard, it is well understood that artificial intelligence (AI) techniques could help inspect chest CTs and X-rays within seconds and thus augment the public health care sector. The use of properly trained AI models for the diagnosis of COVID-19 is promising for scaling up capacity and accelerating the process, as computers are, in general, faster than humans at such computations.</p>
        <p>Many AI and medical imaging methods were explored to provide support in the early diagnosis of COVID-19, for example, AI for COVID-19 [<xref ref-type="bibr" rid="ref3">3</xref>-<xref ref-type="bibr" rid="ref5">5</xref>], machine learning for COVID-19 [<xref ref-type="bibr" rid="ref6">6</xref>], and data science for COVID-19 [<xref ref-type="bibr" rid="ref7">7</xref>]. However, AI techniques rely on large amounts of data. For example, training a convolutional neural network (CNN) to classify COVID-19 versus normal chest X-ray images requires a large number of chest X-ray images for both COVID-19 and normal cases. Since the diagnosis of COVID-19 requires the study of lung CT or X-ray images, the availability of lung imaging data is vital to developing medical imaging methods. However, the lack of data on COVID-19 hampered the initial progress in developing these methods to combat COVID-19.</p>
        <p>Many early attempts were made to collect imaging data for lungs infected with COVID-19—specifically CT and X-ray images either through a private collection in hospitals or through crowdsourcing using public platforms. In parallel, many studies have explored the use of generative adversarial networks (GANs) to generate synthetic imaging data that can improve the training of AI models to diagnose COVID-19.</p>
        <p>GANs are a family of deep learning models that consist of 2 neural networks trained in an adversarial fashion [<xref ref-type="bibr" rid="ref8">8</xref>-<xref ref-type="bibr" rid="ref15">15</xref>]. The 2 neural networks, namely the generator and the discriminator, each attempt to minimize their own loss while maximizing that of the other. This training mechanism improves the overall learning task of the GAN model, particularly for generating data. GANs have recently been studied for computer vision and medical imaging tasks, such as image generation, superresolution, and segmentation [<xref ref-type="bibr" rid="ref9">9</xref>,<xref ref-type="bibr" rid="ref10">10</xref>]. Given the significant potential of GANs in medical imaging, it is unsurprising that many researchers explored the use of GANs for data augmentation of imaging data on COVID-19. In addition, some researchers also used GANs for segmentation and superresolution of lung images.</p>
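The adversarial training mechanism described above can be illustrated with a minimal numeric sketch (a toy 1-dimensional example with hand-derived gradients, not drawn from any of the reviewed studies): a linear generator learns to shift noise toward a "real" data distribution while a logistic discriminator is simultaneously trained to tell real from generated samples.

```python
import numpy as np

rng = np.random.default_rng(0)

def sigmoid(x):
    # Logistic function, clipped for numerical stability
    return 1.0 / (1.0 + np.exp(-np.clip(x, -60.0, 60.0)))

# Toy setup: real 1-D samples ~ N(3, 1); the generator G(z) = a*z + b
# maps standard normal noise, and the discriminator D(x) = sigmoid(w*x + c)
# scores how "real" a sample looks.
real = rng.normal(3.0, 1.0, size=256)
a, b = 1.0, 0.0   # generator parameters
w, c = 0.1, 0.0   # discriminator parameters
lr = 0.05

for _ in range(2000):
    z = rng.normal(size=256)
    fake = a * z + b

    # Discriminator step: ascend E[log D(real)] + E[log(1 - D(fake))]
    d_real = sigmoid(w * real + c)
    d_fake = sigmoid(w * fake + c)
    w += lr * (np.mean((1.0 - d_real) * real) - np.mean(d_fake * fake))
    c += lr * (np.mean(1.0 - d_real) - np.mean(d_fake))

    # Generator step: ascend E[log D(fake)] (the non-saturating loss),
    # ie, move G's outputs toward samples the discriminator calls "real"
    d_fake = sigmoid(w * fake + c)
    a += lr * np.mean((1.0 - d_fake) * w * z)
    b += lr * np.mean((1.0 - d_fake) * w)

# After training, the generator's offset b has drifted from 0 toward the
# real-data mean, so G(z) samples resemble the real distribution.
```

Each player's update improves its own objective at the other's expense, which is the mechanism that, at scale and with convolutional networks, lets GANs synthesize realistic CT and X-ray images for augmentation.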
        <p>This scoping review focuses on providing a comprehensive review of the GAN-based methods used to combat COVID-19. Specifically, it covers the studies where GANs have been used for lung CT and X-ray images to diagnose COVID-19 or to enhance the performance of CNNs for the diagnosis of COVID-19 (eg, by data augmentation or superresolution).</p>
      </sec>
      <sec>
        <title>Research Problem</title>
        <p>GANs have gained the attention of the medical imaging research community. As the COVID-19 pandemic continued to grow in 2020 and 2021, the research community faced a significant challenge due to the scarcity of medical imaging data on COVID-19 that can be used to train AI models (eg, CNN) to perform COVID-19 diagnosis automatically. Given the popularity of GANs for image synthesis, researchers turned to exploring the use of GANs for data augmentation of lung radiology images. Many studies were conducted to use different variants of GANs for data augmentation of lung CT images and lung X-ray images. Similarly, a few studies also used GANs for the diagnosis of COVID-19 from lung radiology images. However, to the best of our knowledge, there is no review on the role of GANs in addressing the challenges related to COVID-19 data scarcity and diagnosis. The following research questions related to COVID-19 imaging data were considered for this review:</p>
        <list list-type="bullet">
          <list-item>
            <p>What were the common applications of GANs proposed for challenges related to COVID-19?</p>
          </list-item>
          <list-item>
            <p>Which architectures of GANs were most commonly applied for data augmentation tasks related to COVID-19?</p>
          </list-item>
          <list-item>
            <p>Which imaging modality was the most popular choice for the diagnosis of COVID-19?</p>
          </list-item>
          <list-item>
            <p>What were the most commonly used data sets of CT and X-ray images for COVID-19?</p>
          </list-item>
          <list-item>
            <p>Which studies provided open-source code to reproduce the results?</p>
          </list-item>
          <list-item>
            <p>Which studies presented their results to radiology experts to evaluate their suitability for future use in clinical applications?</p>
          </list-item>
        </list>
        <p>The results of this review will be helpful for researchers and professionals in the medical imaging and health care domains who are considering using GAN-based methods to address challenges related to COVID-19 imaging data and to improve automatic diagnosis using radiology images.</p>
      </sec>
    </sec>
    <sec sec-type="methods">
      <title>Methods</title>
      <sec>
        <title>Study Design</title>
        <p>In this work, a scoping review was conducted following the Preferred Reporting Items for Systematic Reviews and Meta-Analyses Extension for Scoping Reviews (PRISMA-ScR) guidelines [<xref ref-type="bibr" rid="ref16">16</xref>]. The methods for performing the study are described next.</p>
      </sec>
      <sec>
        <title>Search Strategy</title>
        <sec>
          <title>Search Sources</title>
          <p>A search was conducted from October 11-13, 2021, on the following 5 databases: PubMed, IEEE Xplore, Association for Computing Machinery (ACM) Digital Library, Scopus, and Google Scholar. For Google Scholar, only the first 99 results were retained, as results beyond that point were highly irrelevant to the scope of the study. Similarly, for ACM Digital Library, only the first 100 results were retained, as results beyond that point clearly lacked relevance to the study.</p>
        </sec>
        <sec>
          <title>Search Terms</title>
          <p>The search terms used in this study were chosen from the literature with guidance from experts in the field. The terms were chosen based on the intervention (eg, “generative adversarial networks,” “GANs,” “cycleGANs”) and the target application (eg, “COVID-19,” “coronavirus,” “corona pandemic”). The exact search strings used in the search for this study are available in <xref ref-type="supplementary-material" rid="app1">Multimedia Appendix 1</xref>.</p>
        </sec>
      </sec>
      <sec>
        <title>Search Eligibility Criteria</title>
        <p>This study focused on the applications of GANs in analyzing radiology images of lungs for COVID-19, for any purpose, such as data augmentation or synthesis, diagnosis, superresolution, and prognosis. Only studies that reported GAN-based methods for analyzing chest X-ray, chest CT, or chest ultrasound images were included. Studies that reported GAN-based methods for analyzing nonlung images were excluded, as were studies that used deep learning methods but did not use GANs and studies reporting GANs for nonimaging data. To provide a list of reliable studies, only peer-reviewed articles, conference papers, and book chapters were included; preprints, conference abstracts, short letters, commentaries, and review articles were excluded. No restrictions were imposed on the country of publication, study design, or outcomes. Only studies written in English and published from 2020 to 2022 were included.</p>
      </sec>
      <sec>
        <title>Study Selection</title>
        <p>Two reviewers (authors HA and ZS) independently screened the titles and abstracts of the search results. Disagreement occurred for only 9 articles and was resolved through mutual discussion and consensus. To measure agreement, Cohen κ [<xref ref-type="bibr" rid="ref17">17</xref>] was calculated to be 0.89, indicating strong agreement between the 2 independent reviewers. <xref ref-type="supplementary-material" rid="app2">Multimedia Appendix 2</xref> shows the matrix for the agreement between the 2 independent reviewers.</p>
      </sec>
      <sec>
        <title>Data Extraction</title>
        <p><xref ref-type="supplementary-material" rid="app3">Multimedia Appendix 3</xref> shows the form used for extraction of the key characteristics. The form was pilot-tested and refined in 2 rounds, first by extracting data for 5 studies and then for another 5 studies. This refinement ensured that only relevant data were extracted from the studies. The 2 reviewers (HA and ZS) extracted data related to the GAN-based methods, applications, and data sets from the included studies. Any disagreement between the reviewers was resolved through mutual consensus and discussion. Because disagreements at the study selection stage had been resolved through careful and detailed discussions, disagreements at the data extraction stage were only minor.</p>
      </sec>
      <sec>
        <title>Data Synthesis</title>
        <p>After extraction of the data from the full text of the identified studies, a narrative approach was used to synthesize the data. The use of GAN-based methods was classified in terms of the application of GANs (eg, augmentation, segmentation of lungs); the type of GAN architecture, if reported (eg, conditional GAN or cycleGAN); and the modality of the imaging data for which the GAN was used (eg, CT or X-ray imaging). Similarly, the studies were classified based on the availability of the data set (eg, public or private), the size of the data set (eg, the number of images in the original images and the number of images after augmentation with the GAN, if applicable), and the proportion of the training and test sets as well as the type of cross-validation. The data synthesis was managed and performed using Microsoft Excel.</p>
      </sec>
    </sec>
    <sec sec-type="results">
      <title>Results</title>
      <sec>
        <title>Search Results</title>
        <p>From 5 online databases, a total of 348 studies were retrieved (see <xref rid="figure1" ref-type="fig">Figure 1</xref>). Of the 348 studies, 81 (23.3%) duplicates were removed. The titles and abstracts of the remaining 267 (76.7%) studies were carefully screened as per the inclusion and exclusion criteria, resulting in the exclusion of 208 (77.9%) studies (see <xref rid="figure1" ref-type="fig">Figure 1</xref> for reasons of exclusion). After the full-text reading of the remaining 59 (22.1%) studies, 2 (3%) studies were excluded following the inclusion/exclusion criteria, leaving a total of 57 (97%) studies in this review. No additional studies were found through reference list checking. By year of publication, 15 (26%) of the 57 studies were published in 2020 and 41 (72%) in 2021.</p>
        <fig id="figure1" position="float">
          <label>Figure 1</label>
          <caption>
            <p>PRISMA-ScR flowchart for the search outcomes and selection of studies. GAN: generative adversarial network; PRISMA-ScR: Preferred Reporting Items for Systematic Reviews and Meta-Analyses Extension for Scoping Reviews.</p>
          </caption>
          <graphic xlink:href="medinform_v10i6e37365_fig1.png" alt-version="no" mimetype="image" position="float" xlink:type="simple"/>
        </fig>
      </sec>
      <sec>
        <title>Demographics of the Included Studies</title>
        <p>Among the included studies (N=57), 37 (65%) were published as articles in peer-reviewed journals, 18 (32%) in conference proceedings, and 2 (4%) as book chapters. No thesis was found relevant to the scope of this review. Around one-fourth of the studies (n=15, 26%) were published in 2020, and most (n=41, 72%) were published in 2021. The included studies came from 14 countries. The largest number of publications was from China (n=12, 21%), followed by India (n=10, 18%). The United States and Egypt each contributed 6 (11%) studies. The characteristics are summarized in <xref ref-type="table" rid="table1">Table 1</xref> and <xref ref-type="supplementary-material" rid="app4">Multimedia Appendix 4</xref>. <xref rid="figure2" ref-type="fig">Figure 2</xref> (see [<xref ref-type="bibr" rid="ref18">18</xref>-<xref ref-type="bibr" rid="ref74">74</xref>]) shows the demographics of the included studies, along with the modality of the chest images used.</p>
        <table-wrap position="float" id="table1">
          <label>Table 1</label>
          <caption>
            <p>Characteristics of the included studies (N=57). Demographics are shown for type of publication, country of publication, and year of publication.</p>
          </caption>
          <table width="1000" cellpadding="5" cellspacing="0" border="1" rules="groups" frame="hsides">
            <col width="30"/>
            <col width="470"/>
            <col width="500"/>
            <thead>
              <tr valign="top">
                <td colspan="2">Characteristics</td>
                <td>Studies, n (%)</td>
              </tr>
            </thead>
            <tbody>
              <tr valign="top">
                <td colspan="3">
                  <bold>Publication type</bold>
                </td>
              </tr>
              <tr valign="top">
                <td rowspan="3">
                  <break/>
                </td>
                <td>Journal</td>
                <td>37 (65)</td>
              </tr>
              <tr valign="top">
                <td>Conference</td>
                <td>18 (32)</td>
              </tr>
              <tr valign="top">
                <td>Book chapter</td>
                <td>2 (4)</td>
              </tr>
              <tr valign="top">
                <td colspan="3">
                  <bold>Country</bold>
                </td>
              </tr>
              <tr valign="top">
                <td rowspan="14">
                  <break/>
                </td>
                <td>China</td>
                <td>12 (21)</td>
              </tr>
              <tr valign="top">
                <td>India</td>
                <td>10 (18)</td>
              </tr>
              <tr valign="top">
                <td>United States</td>
                <td>6 (11)</td>
              </tr>
              <tr valign="top">
                <td>Egypt</td>
                <td>6 (11)</td>
              </tr>
              <tr valign="top">
                <td>Canada</td>
                <td>4 (7)</td>
              </tr>
              <tr valign="top">
                <td>Spain</td>
                <td>3 (5)</td>
              </tr>
              <tr valign="top">
                <td>Malaysia</td>
                <td>2 (4)</td>
              </tr>
              <tr valign="top">
                <td>Turkey</td>
                <td>2 (4)</td>
              </tr>
              <tr valign="top">
                <td>Pakistan</td>
                <td>2 (4)</td>
              </tr>
              <tr valign="top">
                <td>Vietnam</td>
                <td>1 (2)</td>
              </tr>
              <tr valign="top">
                <td>Mexico</td>
                <td>1 (2)</td>
              </tr>
              <tr valign="top">
                <td>South Korea</td>
                <td>1 (2)</td>
              </tr>
              <tr valign="top">
                <td>Philippines</td>
                <td>1 (2)</td>
              </tr>
              <tr valign="top">
                <td>Israel</td>
                <td>1 (2)</td>
              </tr>
              <tr valign="top">
                <td colspan="3">
                  <bold>Year of publication</bold>
                </td>
              </tr>
              <tr valign="top">
                <td rowspan="3">
                  <break/>
                </td>
                <td>2020</td>
                <td>15 (26)</td>
              </tr>
              <tr valign="top">
                <td>2021</td>
                <td>41 (72)</td>
              </tr>
              <tr valign="top">
                <td>2022</td>
                <td>1 (2)</td>
              </tr>
            </tbody>
          </table>
        </table-wrap>
        <fig id="figure2" position="float">
          <label>Figure 2</label>
          <caption>
            <p>Characteristics of the included studies showing the publication type, country of publication, and modality of data. The number of studies is reflected by the size of the terminal node. The numbers S1-S57 refer to the included studies. CT: computed tomography.</p>
          </caption>
          <graphic xlink:href="medinform_v10i6e37365_fig2.png" alt-version="no" mimetype="image" position="float" xlink:type="simple"/>
        </fig>
      </sec>
      <sec>
        <title>Application of the Studies</title>
        <p>As shown in <xref ref-type="table" rid="table2">Table 2</xref>, the included studies have reported 5 different tasks being addressed: augmentation (data augmentation), diagnosis of COVID-19, prognosis, segmentation (to identify the lung region), and diagnosis of lung diseases. As the diagnosis of COVID-19 using medical imaging has been a priority since the pandemic started, 39 (68%) of 57 studies reported the diagnosis of COVID-19 as the main focus of their work [<xref ref-type="bibr" rid="ref19">19</xref>-<xref ref-type="bibr" rid="ref21">21</xref>, <xref ref-type="bibr" rid="ref23">23</xref>-<xref ref-type="bibr" rid="ref33">33</xref>, <xref ref-type="bibr" rid="ref35">35</xref>-<xref ref-type="bibr" rid="ref37">37</xref>, <xref ref-type="bibr" rid="ref39">39</xref>, <xref ref-type="bibr" rid="ref41">41</xref>, <xref ref-type="bibr" rid="ref42">42</xref>, <xref ref-type="bibr" rid="ref44">44</xref>, <xref ref-type="bibr" rid="ref46">46</xref>, <xref ref-type="bibr" rid="ref50">50</xref>, <xref ref-type="bibr" rid="ref52">52</xref>, <xref ref-type="bibr" rid="ref53">53</xref>, <xref ref-type="bibr" rid="ref55">55</xref>, <xref ref-type="bibr" rid="ref56">56</xref>, <xref ref-type="bibr" rid="ref58">58</xref>-<xref ref-type="bibr" rid="ref60">60</xref>, <xref ref-type="bibr" rid="ref63">63</xref>-<xref ref-type="bibr" rid="ref69">69</xref>, <xref ref-type="bibr" rid="ref71">71</xref>, <xref ref-type="bibr" rid="ref72">72</xref>]. 
In addition, 9 (16%) studies reported data augmentation as the main task addressed in the work [<xref ref-type="bibr" rid="ref18">18</xref>,<xref ref-type="bibr" rid="ref43">43</xref>,<xref ref-type="bibr" rid="ref45">45</xref>,<xref ref-type="bibr" rid="ref49">49</xref>,<xref ref-type="bibr" rid="ref54">54</xref>,<xref ref-type="bibr" rid="ref61">61</xref>,<xref ref-type="bibr" rid="ref62">62</xref>], 1 (2%) study reported prognosis of COVID-19 [<xref ref-type="bibr" rid="ref22">22</xref>], 3 (5%) studies reported segmentation of lungs [<xref ref-type="bibr" rid="ref34">34</xref>,<xref ref-type="bibr" rid="ref51">51</xref>,<xref ref-type="bibr" rid="ref57">57</xref>], and 1 (2%) study reported diagnosis of multiple lung diseases [<xref ref-type="bibr" rid="ref47">47</xref>].</p>
        <table-wrap position="float" id="table2">
          <label>Table 2</label>
          <caption>
            <p>Applications of using GAN<sup>a</sup>-based methods and types of GANs.</p>
          </caption>
          <table width="1000" cellpadding="5" cellspacing="0" border="1" rules="groups" frame="hsides">
            <col width="30"/>
            <col width="470"/>
            <col width="500"/>
            <thead>
              <tr valign="top">
                <td colspan="2">Applications</td>
                <td>Studies (N=57), n (%)</td>
              </tr>
            </thead>
            <tbody>
              <tr valign="top">
                <td colspan="3">
                  <bold>Applications addressed in the studies</bold>
                </td>
              </tr>
              <tr valign="top">
                <td rowspan="7">
                  <break/>
                </td>
                <td>Diagnosis</td>
                <td>39 (68)</td>
              </tr>
              <tr valign="top">
                <td>Data augmentation</td>
                <td>9 (16)</td>
              </tr>
              <tr valign="top">
                <td>Segmentation+diagnosis</td>
                <td>3 (5)</td>
              </tr>
              <tr valign="top">
                <td>Segmentation</td>
                <td>3 (5)</td>
              </tr>
              <tr valign="top">
                <td>Diagnosis of lung disease</td>
                <td>1 (2)</td>
              </tr>
              <tr valign="top">
                <td>Prognosis</td>
                <td>1 (2)</td>
              </tr>
              <tr valign="top">
                <td>Prognosis+diagnosis</td>
                <td>1 (2)</td>
              </tr>
              <tr valign="top">
                <td colspan="3">
                  <bold>Applications of using GANs</bold>
                </td>
              </tr>
              <tr valign="top">
                <td rowspan="7">
                  <break/>
                </td>
                <td>Augmentation</td>
                <td>42 (74)</td>
              </tr>
              <tr valign="top">
                <td>Diagnosis</td>
                <td>5 (9)</td>
              </tr>
              <tr valign="top">
                <td>Superresolution</td>
                <td>3 (5)</td>
              </tr>
              <tr valign="top">
                <td>Segmentation</td>
                <td>3 (5)</td>
              </tr>
              <tr valign="top">
                <td>Feature extraction</td>
                <td>2 (4)</td>
              </tr>
              <tr valign="top">
                <td>Prognosis</td>
                <td>1 (2)</td>
              </tr>
              <tr valign="top">
                <td>3D synthesis</td>
                <td>1 (2)</td>
              </tr>
              <tr valign="top">
                <td colspan="3">
                  <bold>Type of GAN used</bold>
                </td>
              </tr>
              <tr valign="top">
                <td rowspan="10">
                  <break/>
                </td>
                <td>GAN</td>
                <td>17 (30)</td>
              </tr>
              <tr valign="top">
                <td>CycleGAN</td>
                <td>9 (16)</td>
              </tr>
              <tr valign="top">
                <td>Conditional GAN</td>
                <td>9 (16)</td>
              </tr>
              <tr valign="top">
                <td>Deep convolutional GAN</td>
                <td>4 (7)</td>
              </tr>
              <tr valign="top">
                <td>Auxiliary classifier GAN</td>
                <td>4 (7)</td>
              </tr>
              <tr valign="top">
                <td>Superresolution GAN</td>
                <td>2 (4)</td>
              </tr>
              <tr valign="top">
                <td>3D conditional GAN</td>
                <td>2 (4)</td>
              </tr>
              <tr valign="top">
                <td>BiGAN</td>
                <td>1 (2)</td>
              </tr>
              <tr valign="top">
                <td>Random GAN</td>
                <td>1 (2)</td>
              </tr>
              <tr valign="top">
                <td>Pix2pix GAN</td>
                <td>1 (2)</td>
              </tr>
            </tbody>
          </table>
          <table-wrap-foot>
            <fn id="table2fn1">
              <p><sup>a</sup>GAN: generative adversarial network.</p>
            </fn>
          </table-wrap-foot>
        </table-wrap>
        <p>The majority of the studies used GANs for data augmentation, that is, to increase the size of the data set. Specifically, 42 (74%) studies used GAN-based methods for data augmentation [<xref ref-type="bibr" rid="ref18">18</xref>, <xref ref-type="bibr" rid="ref21">21</xref>, <xref ref-type="bibr" rid="ref23">23</xref>-<xref ref-type="bibr" rid="ref29">29</xref>, <xref ref-type="bibr" rid="ref31">31</xref>-<xref ref-type="bibr" rid="ref36">36</xref>, <xref ref-type="bibr" rid="ref38">38</xref>-<xref ref-type="bibr" rid="ref43">43</xref>, <xref ref-type="bibr" rid="ref45">45</xref>, <xref ref-type="bibr" rid="ref46">46</xref>, <xref ref-type="bibr" rid="ref48">48</xref>, <xref ref-type="bibr" rid="ref50">50</xref>, <xref ref-type="bibr" rid="ref52">52</xref>-<xref ref-type="bibr" rid="ref56">56</xref>, <xref ref-type="bibr" rid="ref59">59</xref>-<xref ref-type="bibr" rid="ref67">67</xref>, <xref ref-type="bibr" rid="ref71">71</xref>, <xref ref-type="bibr" rid="ref73">73</xref>, <xref ref-type="bibr" rid="ref74">74</xref>]. The augmented data were then used to improve the training of different CNNs to diagnose COVID-19. 
In addition, 3 (5%) studies used GANs for segmentation of the lung region within the chest radiology images [<xref ref-type="bibr" rid="ref37">37</xref>,<xref ref-type="bibr" rid="ref51">51</xref>,<xref ref-type="bibr" rid="ref57">57</xref>], 3 (5%) studies used GANs for superresolution to improve the quality of the images before using them for diagnosis purposes [<xref ref-type="bibr" rid="ref30">30</xref>,<xref ref-type="bibr" rid="ref44">44</xref>,<xref ref-type="bibr" rid="ref68">68</xref>], 5 (9%) studies used GANs for the diagnosis of COVID-19 [<xref ref-type="bibr" rid="ref20">20</xref>,<xref ref-type="bibr" rid="ref58">58</xref>,<xref ref-type="bibr" rid="ref69">69</xref>,<xref ref-type="bibr" rid="ref70">70</xref>,<xref ref-type="bibr" rid="ref72">72</xref>], 2 (4%) studies used GANs for feature extraction from images [<xref ref-type="bibr" rid="ref19">19</xref>,<xref ref-type="bibr" rid="ref47">47</xref>], and 1 (2%) study used a GAN-based method for prognosis of COVID-19 [<xref ref-type="bibr" rid="ref22">22</xref>]. Two-dimensional imaging data were prevalent; only 1 (2%) study reported a GAN-based method for synthesizing 3D data [<xref ref-type="bibr" rid="ref49">49</xref>]. <xref rid="figure3" ref-type="fig">Figure 3</xref> (see [<xref ref-type="bibr" rid="ref18">18</xref>-<xref ref-type="bibr" rid="ref74">74</xref>]) shows the mapping of the applications of GAN-based methods for all the included studies.</p>
        <p>Different variants have been proposed for GAN architectures since their inception. The most common type of GAN used in these studies was the cycleGAN, used in 9 (16%) studies [<xref ref-type="bibr" rid="ref29">29</xref>,<xref ref-type="bibr" rid="ref35">35</xref>,<xref ref-type="bibr" rid="ref36">36</xref>,<xref ref-type="bibr" rid="ref42">42</xref>,<xref ref-type="bibr" rid="ref46">46</xref>,<xref ref-type="bibr" rid="ref54">54</xref>,<xref ref-type="bibr" rid="ref56">56</xref>,<xref ref-type="bibr" rid="ref70">70</xref>,<xref ref-type="bibr" rid="ref74">74</xref>]. The cycleGAN is an image translation GAN that does not require paired data to transform images from one domain to another. Other popular types of GANs were conditional GAN used by 9 (16%) studies [<xref ref-type="bibr" rid="ref18">18</xref>,<xref ref-type="bibr" rid="ref22">22</xref>,<xref ref-type="bibr" rid="ref24">24</xref>,<xref ref-type="bibr" rid="ref25">25</xref>,<xref ref-type="bibr" rid="ref33">33</xref>,<xref ref-type="bibr" rid="ref37">37</xref>,<xref ref-type="bibr" rid="ref41">41</xref>,<xref ref-type="bibr" rid="ref57">57</xref>,<xref ref-type="bibr" rid="ref60">60</xref>], deep convolutional GAN used by 4 (7%) studies [<xref ref-type="bibr" rid="ref21">21</xref>,<xref ref-type="bibr" rid="ref38">38</xref>,<xref ref-type="bibr" rid="ref43">43</xref>,<xref ref-type="bibr" rid="ref67">67</xref>], and auxiliary classifier GAN used by 4 (7%) studies [<xref ref-type="bibr" rid="ref32">32</xref>,<xref ref-type="bibr" rid="ref40">40</xref>,<xref ref-type="bibr" rid="ref55">55</xref>,<xref ref-type="bibr" rid="ref69">69</xref>]. 
The superresolution GAN was used by 2 (4%) studies [<xref ref-type="bibr" rid="ref44">44</xref>,<xref ref-type="bibr" rid="ref68">68</xref>], and 1 (2%) study reported the use of multiple GANs, namely the Wasserstein GAN, auxiliary classifier GAN, and deep convolutional GAN, and compared their performances for improving the quality of images [<xref ref-type="bibr" rid="ref31">31</xref>].</p>
        <p>Of the 57 studies, only 10 (18%) [<xref ref-type="bibr" rid="ref18">18</xref>,<xref ref-type="bibr" rid="ref19">19</xref>,<xref ref-type="bibr" rid="ref26">26</xref>,<xref ref-type="bibr" rid="ref27">27</xref>,<xref ref-type="bibr" rid="ref30">30</xref>,<xref ref-type="bibr" rid="ref34">34</xref>,<xref ref-type="bibr" rid="ref43">43</xref>,<xref ref-type="bibr" rid="ref61">61</xref>-<xref ref-type="bibr" rid="ref73">73</xref>] reported changes to the architecture of the GAN they were using. In the rest of the studies, no major changes were reported to the architecture of the GAN.</p>
        <fig id="figure3" position="float">
          <label>Figure 3</label>
          <caption>
            <p>Major applications of GANs in the included studies. The number of publications for each application is reflected by the size of the circle in the second-last layer. The numbers S1-S57 refer to the included studies. GAN: generative adversarial network.</p>
          </caption>
          <graphic xlink:href="medinform_v10i6e37365_fig3.png" alt-version="no" mimetype="image" position="float" xlink:type="simple"/>
        </fig>
      </sec>
      <sec>
        <title>Characteristics of the Data Sets</title>
        <p>The included studies applied GANs on lung radiology images obtained using various modalities. Specifically, the use of X-ray images dominated the studies. In total, 29 (51%) studies used X-ray images of lungs [<xref ref-type="bibr" rid="ref20">20</xref>,<xref ref-type="bibr" rid="ref21">21</xref>, <xref ref-type="bibr" rid="ref25">25</xref>, <xref ref-type="bibr" rid="ref27">27</xref>-<xref ref-type="bibr" rid="ref29">29</xref>, <xref ref-type="bibr" rid="ref31">31</xref>, <xref ref-type="bibr" rid="ref32">32</xref>, <xref ref-type="bibr" rid="ref35">35</xref>, <xref ref-type="bibr" rid="ref37">37</xref>, <xref ref-type="bibr" rid="ref40">40</xref>-<xref ref-type="bibr" rid="ref43">43</xref>, <xref ref-type="bibr" rid="ref45">45</xref>, <xref ref-type="bibr" rid="ref50">50</xref>, <xref ref-type="bibr" rid="ref52">52</xref>, <xref ref-type="bibr" rid="ref54">54</xref>, <xref ref-type="bibr" rid="ref56">56</xref>, <xref ref-type="bibr" rid="ref57">57</xref>, <xref ref-type="bibr" rid="ref59">59</xref>, <xref ref-type="bibr" rid="ref60">60</xref>, <xref ref-type="bibr" rid="ref62">62</xref>, <xref ref-type="bibr" rid="ref64">64</xref>, <xref ref-type="bibr" rid="ref65">65</xref>, <xref ref-type="bibr" rid="ref67">67</xref>, <xref ref-type="bibr" rid="ref70">70</xref>, <xref ref-type="bibr" rid="ref73">73</xref>, <xref ref-type="bibr" rid="ref74">74</xref>], while 21 (37%) studies used CT images [<xref ref-type="bibr" rid="ref18">18</xref>,<xref ref-type="bibr" rid="ref19">19</xref>,<xref ref-type="bibr" rid="ref22">22</xref>-<xref ref-type="bibr" rid="ref24">24</xref>,<xref ref-type="bibr" rid="ref26">26</xref>,<xref ref-type="bibr" rid="ref30">30</xref>,<xref ref-type="bibr" rid="ref33">33</xref>,<xref ref-type="bibr" rid="ref34">34</xref>,<xref ref-type="bibr" rid="ref36">36</xref>,<xref ref-type="bibr" rid="ref38">38</xref>,<xref ref-type="bibr" rid="ref48">48</xref>,<xref ref-type="bibr" rid="ref49">49</xref>,<xref ref-type="bibr" 
rid="ref51">51</xref>,<xref ref-type="bibr" rid="ref53">53</xref>,<xref ref-type="bibr" rid="ref55">55</xref>,<xref ref-type="bibr" rid="ref58">58</xref>,<xref ref-type="bibr" rid="ref61">61</xref>,<xref ref-type="bibr" rid="ref63">63</xref>,<xref ref-type="bibr" rid="ref66">66</xref>,<xref ref-type="bibr" rid="ref71">71</xref>], and 6 (11%) studies reported the use of both X-ray and CT images [<xref ref-type="bibr" rid="ref39">39</xref>,<xref ref-type="bibr" rid="ref44">44</xref>,<xref ref-type="bibr" rid="ref46">46</xref>,<xref ref-type="bibr" rid="ref47">47</xref>,<xref ref-type="bibr" rid="ref68">68</xref>,<xref ref-type="bibr" rid="ref72">72</xref>]. Only 1 (2%) study used ultrasound images for COVID-19 diagnosis [<xref ref-type="bibr" rid="ref69">69</xref>], which shows that ultrasound is not a popular imaging modality for training GANs and other deep learning models for COVID-19 detection (also see <xref rid="figure4" ref-type="fig">Figure 4</xref>). Of the 57 studies, most (n=47, 82%) used image data sets that are publicly available. In 10 (18%) studies, the data sets used are private. <xref ref-type="table" rid="table3">Table 3</xref> provides a list of the various data sets used in the included studies and whether they are publicly available data sets or private. The most commonly used data set was the COVIDx data set available on Github, used by 26 (46%) studies.</p>
        <fig id="figure4" position="float">
          <label>Figure 4</label>
          <caption>
            <p>Venn diagram showing the number of studies using CT vs X-ray images. Only 1 (2%) study reported the use of ultrasound images (not reflected here). CT: computed tomography.</p>
          </caption>
          <graphic xlink:href="medinform_v10i6e37365_fig4.png" alt-version="no" mimetype="image" position="float" xlink:type="simple"/>
        </fig>
        <table-wrap position="float" id="table3">
          <label>Table 3</label>
          <caption>
            <p>Resources of the data sets used in the included studies. The name is provided only if available.</p>
          </caption>
          <table width="1000" cellpadding="5" cellspacing="0" border="1" rules="groups" frame="hsides">
            <col width="500"/>
            <col width="250"/>
            <col width="250"/>
            <thead>
              <tr valign="top">
                <td>Platform (name)</td>
                <td>Public or private</td>
                <td>Modality of imaging</td>
              </tr>
            </thead>
            <tbody>
              <tr valign="top">
                <td>Kaggle</td>
                <td>Public [<xref ref-type="bibr" rid="ref75">75</xref>]</td>
                <td>CT<sup>a</sup></td>
              </tr>
              <tr valign="top">
                <td>Github</td>
                <td>Public [<xref ref-type="bibr" rid="ref76">76</xref>]</td>
                <td>CT</td>
              </tr>
              <tr valign="top">
                <td>Github</td>
                <td>Public [<xref ref-type="bibr" rid="ref77">77</xref>]</td>
                <td>CT</td>
              </tr>
              <tr valign="top">
                <td>Github (Covidx)</td>
                <td>Public [<xref ref-type="bibr" rid="ref78">78</xref>]</td>
                <td>X-ray, CT</td>
              </tr>
              <tr valign="top">
                <td>Github</td>
                <td>Public [<xref ref-type="bibr" rid="ref79">79</xref>]</td>
                <td>X-ray</td>
              </tr>
              <tr valign="top">
                <td>Kaggle (Tawsif)</td>
                <td>Public [<xref ref-type="bibr" rid="ref80">80</xref>]</td>
                <td>X-ray</td>
              </tr>
              <tr valign="top">
                <td>Github</td>
                <td>Public [<xref ref-type="bibr" rid="ref81">81</xref>]</td>
                <td>X-ray</td>
              </tr>
              <tr valign="top">
                <td>Kaggle</td>
                <td>Public [<xref ref-type="bibr" rid="ref82">82</xref>]</td>
                <td>X-ray</td>
              </tr>
              <tr valign="top">
                <td>Mendeley</td>
                <td>Public [<xref ref-type="bibr" rid="ref83">83</xref>]</td>
                <td>CT</td>
              </tr>
              <tr valign="top">
                <td>Website</td>
                <td>Public [<xref ref-type="bibr" rid="ref84">84</xref>]</td>
                <td>CT</td>
              </tr>
              <tr valign="top">
                <td>Kaggle (Allen Institute)</td>
                <td>Public [<xref ref-type="bibr" rid="ref85">85</xref>]</td>
                <td>CT</td>
              </tr>
              <tr valign="top">
                <td>Kaggle (RSNA)</td>
                <td>Public [<xref ref-type="bibr" rid="ref86">86</xref>]</td>
                <td>X-ray</td>
              </tr>
              <tr valign="top">
                <td>Website</td>
                <td>Public [<xref ref-type="bibr" rid="ref87">87</xref>]</td>
                <td>CT</td>
              </tr>
              <tr valign="top">
                <td>Github</td>
                <td>Public [<xref ref-type="bibr" rid="ref88">88</xref>]</td>
                <td>Ultrasound</td>
              </tr>
              <tr valign="top">
                <td>Kaggle</td>
                <td>Public [<xref ref-type="bibr" rid="ref89">89</xref>]</td>
                <td>X-ray</td>
              </tr>
              <tr valign="top">
                <td>Website (Italian Society of Medical and Interventional Radiology)</td>
                <td>Public [<xref ref-type="bibr" rid="ref90">90</xref>]</td>
                <td>X-ray</td>
              </tr>
              <tr valign="top">
                <td>First Affiliated Hospital of the University of Science and Technology of China</td>
                <td>Private</td>
                <td>CT</td>
              </tr>
              <tr valign="top">
                <td>Massachusetts General Hospital, Brigham and Women's Hospital</td>
                <td>Private</td>
                <td>CT</td>
              </tr>
              <tr valign="top">
                <td>Complejo Hospitalario Universitario de A Coruña, Spain</td>
                <td>Private</td>
                <td>X-ray</td>
              </tr>
            </tbody>
          </table>
          <table-wrap-foot>
            <fn id="table3fn1">
              <p><sup>a</sup>CT: computed tomography.</p>
            </fn>
          </table-wrap-foot>
        </table-wrap>
        <p>The majority of the studies reported the size of the data set in terms of the number of images. The number of images used was greater than 10,000 in only 7 (12%) studies [<xref ref-type="bibr" rid="ref20">20</xref>,<xref ref-type="bibr" rid="ref22">22</xref>,<xref ref-type="bibr" rid="ref30">30</xref>,<xref ref-type="bibr" rid="ref39">39</xref>,<xref ref-type="bibr" rid="ref63">63</xref>,<xref ref-type="bibr" rid="ref66">66</xref>,<xref ref-type="bibr" rid="ref74">74</xref>], while 3 (5%) studies used between 5000 and 10,000 images [<xref ref-type="bibr" rid="ref33">33</xref>,<xref ref-type="bibr" rid="ref47">47</xref>,<xref ref-type="bibr" rid="ref64">64</xref>]. The most common range was 1000-5000 images, used in 15 (26%) studies. Around one-fifth of the studies (n=11, 19%) used between 500 and 1000 images. In 11 (19%) other studies, the number of images used was less than 500. No study reported fewer than 100 images. The maximum number of images was 84,971, used by Uemura et al [<xref ref-type="bibr" rid="ref22">22</xref>]. Only a few of the studies reported the number of patients whose data were used: 1 (2%) study used data for more than 1000 patients [<xref ref-type="bibr" rid="ref26">26</xref>], 2 (4%) studies used data for 500-1000 patients [<xref ref-type="bibr" rid="ref29">29</xref>,<xref ref-type="bibr" rid="ref42">42</xref>], 6 (11%) studies used data for 100-500 patients [<xref ref-type="bibr" rid="ref19">19</xref>,<xref ref-type="bibr" rid="ref22">22</xref>,<xref ref-type="bibr" rid="ref24">24</xref>,<xref ref-type="bibr" rid="ref30">30</xref>,<xref ref-type="bibr" rid="ref38">38</xref>,<xref ref-type="bibr" rid="ref71">71</xref>], and 4 (7%) studies used data for less than 100 patients [<xref ref-type="bibr" rid="ref18">18</xref>,<xref ref-type="bibr" rid="ref49">49</xref>,<xref ref-type="bibr" rid="ref66">66</xref>,<xref ref-type="bibr" rid="ref69">69</xref>]. The number of patients was not reported in the rest of the studies.</p>
        <p>After augmentation using GANs, the studies increased the number of images to several thousand, with a maximum of 21,295 images [<xref ref-type="bibr" rid="ref54">54</xref>]. In 6 (11%) studies using GANs for data augmentation, the number of images increased to more than 10,000. In 3 (5%) studies, the number of images increased to 5000-10,000. In 9 (16%) studies, the number of images increased to 1000-5000, and in 2 (4%) studies, the number of images increased to between 500 and 1000. No study reported data augmentation output below 500 images.</p>
      </sec>
      <sec>
        <title>Evaluation Mechanisms</title>
        <p>Generally, the popular metrics for evaluating the diagnosis and classification performances of neural networks are accuracy, precision, recall, dice score, and area under the receiver operating characteristic curve (AUROC). To evaluate the performance of neural networks for diagnosis of COVID-19, 38 (67%) of the 57 studies used accuracy, along with metrics such as precision, recall, and dice score [<xref ref-type="bibr" rid="ref21">21</xref>,<xref ref-type="bibr" rid="ref23">23</xref>-<xref ref-type="bibr" rid="ref28">28</xref>,<xref ref-type="bibr" rid="ref31">31</xref>-<xref ref-type="bibr" rid="ref34">34</xref>,<xref ref-type="bibr" rid="ref36">36</xref>,<xref ref-type="bibr" rid="ref38">38</xref>,<xref ref-type="bibr" rid="ref40">40</xref>,<xref ref-type="bibr" rid="ref43">43</xref>-<xref ref-type="bibr" rid="ref48">48</xref>,<xref ref-type="bibr" rid="ref52">52</xref>,<xref ref-type="bibr" rid="ref53">53</xref>,<xref ref-type="bibr" rid="ref55">55</xref>,<xref ref-type="bibr" rid="ref56">56</xref>,<xref ref-type="bibr" rid="ref58">58</xref>-<xref ref-type="bibr" rid="ref60">60</xref>,<xref ref-type="bibr" rid="ref63">63</xref>-<xref ref-type="bibr" rid="ref72">72</xref>,<xref ref-type="bibr" rid="ref74">74</xref>]. Around one-third of the studies (n=18, 32%) used sensitivity and specificity. In addition, 12 (21%) studies used the AUROC [<xref ref-type="bibr" rid="ref19">19</xref>,<xref ref-type="bibr" rid="ref20">20</xref>,<xref ref-type="bibr" rid="ref26">26</xref>,<xref ref-type="bibr" rid="ref30">30</xref>,<xref ref-type="bibr" rid="ref32">32</xref>,<xref ref-type="bibr" rid="ref46">46</xref>-<xref ref-type="bibr" rid="ref48">48</xref>,<xref ref-type="bibr" rid="ref50">50</xref>,<xref ref-type="bibr" rid="ref51">51</xref>,<xref ref-type="bibr" rid="ref68">68</xref>,<xref ref-type="bibr" rid="ref74">74</xref>]. These counts sum to more than 57 because many studies used more than 1 metric for evaluation. 
In addition to the metrics mentioned here, 1 (2%) study used additional metrics, namely concordance index and relative absolute error, to evaluate prognosis and survival prediction for patients with COVID-19 [<xref ref-type="bibr" rid="ref22">22</xref>].</p>
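        <p>As an illustration of one of these metrics (a toy sketch, not taken from any of the included studies), the dice score measures the overlap between a predicted and a reference binary segmentation mask:</p>

```python
def dice_score(pred_mask, true_mask):
    """Dice coefficient between two binary segmentation masks,
    given as flat lists of 0/1 pixel labels of equal length."""
    # Overlap: pixels labeled 1 in both masks
    intersection = sum(p * t for p, t in zip(pred_mask, true_mask))
    total = sum(pred_mask) + sum(true_mask)
    if total == 0:
        return 1.0  # both masks empty: treated as perfect agreement
    return 2.0 * intersection / total
```

<p>For example, a prediction that overlaps the reference on half of its foreground pixels yields a dice score of 0.5; identical masks yield 1.0.</p>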
        <p>Likewise, the popular metrics used to assess the quality of the synthesized images are the structural similarity measure (SSIM), the peak signal-to-noise ratio (PSNR), and the Fréchet inception distance (FID). Of the 57 studies included, 6 (11%) used the SSIM [<xref ref-type="bibr" rid="ref18">18</xref>,<xref ref-type="bibr" rid="ref30">30</xref>,<xref ref-type="bibr" rid="ref49">49</xref>,<xref ref-type="bibr" rid="ref60">60</xref>-<xref ref-type="bibr" rid="ref62">62</xref>], 5 (9%) used the PSNR [<xref ref-type="bibr" rid="ref18">18</xref>,<xref ref-type="bibr" rid="ref30">30</xref>,<xref ref-type="bibr" rid="ref49">49</xref>,<xref ref-type="bibr" rid="ref61">61</xref>,<xref ref-type="bibr" rid="ref62">62</xref>], and 3 (5%) used the FID metric [<xref ref-type="bibr" rid="ref18">18</xref>,<xref ref-type="bibr" rid="ref43">43</xref>,<xref ref-type="bibr" rid="ref62">62</xref>] for evaluation.</p>
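        <p>For instance, the PSNR compares a generated image against a reference image via the mean squared pixel error; the following minimal sketch (illustrative only, not from any of the included studies) operates on flat lists of grayscale intensities:</p>

```python
import math

def psnr(reference, generated, max_value=255.0):
    """Peak signal-to-noise ratio (in dB) between two equally sized
    grayscale images, given as flat lists of pixel intensities."""
    mse = sum((r - g) ** 2 for r, g in zip(reference, generated)) / len(reference)
    if mse == 0:
        return math.inf  # identical images: PSNR is unbounded
    return 10.0 * math.log10(max_value ** 2 / mse)
```

<p>Higher values indicate a generated image closer to the reference; the SSIM and FID follow the same pattern of comparing synthesized output against real data, but using local structure statistics and deep-feature distributions, respectively.</p>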
        <p>The majority of the studies (n=42, 74%) reported having the data split between independent training and test sets. A few of the studies (n=6, 11%) reported 5-fold or 10-fold cross-validation for training and evaluation of the model. For almost one-sixth of the studies (n=9, 16%), the information on cross-validation was not available.</p>
      </sec>
      <sec>
        <title>Reproducibility and Secondary Evaluation</title>
        <p>This review also summarizes the studies in which the authors provided the implementation code. Only 7 (12%) of the 57 studies provided links for their code [<xref ref-type="bibr" rid="ref19">19</xref>,<xref ref-type="bibr" rid="ref20">20</xref>,<xref ref-type="bibr" rid="ref34">34</xref>,<xref ref-type="bibr" rid="ref47">47</xref>,<xref ref-type="bibr" rid="ref48">48</xref>,<xref ref-type="bibr" rid="ref66">66</xref>,<xref ref-type="bibr" rid="ref70">70</xref>]. Only 2 (4%) studies reported a secondary evaluation in which the outputs of their models were presented to radiologists, doctors, or experts [<xref ref-type="bibr" rid="ref19">19</xref>,<xref ref-type="bibr" rid="ref45">45</xref>]. Specifically, 1 (2%) study presented the results of end-to-end diagnosis of COVID-19 from CT images to 3 radiologists for a second opinion [<xref ref-type="bibr" rid="ref19">19</xref>], and 1 (2%) study presented synthetic X-ray images to 2 radiologists for a second opinion on the quality of the generated X-ray images [<xref ref-type="bibr" rid="ref45">45</xref>].</p>
      </sec>
    </sec>
    <sec sec-type="discussion">
      <title>Discussion</title>
      <sec>
        <title>Principal Findings</title>
        <p>In this review, a significant rise in the number of studies on the topic was found in 2021 compared to 2020. This makes sense as the first half of 2020 saw only initial cases of COVID-19 infection, and research on the use of GANs for COVID-19 had yet to gain pace. Lung radiology image data for COVID-19-positive examples gradually became available during this period and increased only in the latter part of 2020. The highest number of studies were published from China and India (n=22). There can be 2 possible reasons for this. First, the 2 countries hold the top 2 spots on the ranking of the world's most populous countries. Second, the COVID-19 pandemic started in China, hence prompting earlier research efforts there.</p>
        <p>Interestingly, the United States and Egypt each published the same number of studies (n=6). The correlation mapping in <xref rid="figure5" ref-type="fig">Figure 5</xref> shows that most of the studies published in 2020 originated from China, India, Egypt, and Canada, whereas in 2021, many other countries also contributed to the published research. The number of journal papers was twice that of conference papers. This is surprising, as journal publications typically require more processing time than conference papers. It is possible that many authors turned to journal submissions because, at the start of the pandemic, many conferences were suspended before moving to the online (virtual) mode.</p>
        <p>In the majority of the included studies (n=39), the main task was to diagnose COVID-19 using lung CT or X-ray images. In these studies, a GAN was used as a submodule of the overall framework, and diagnosis was performed with the help of variants of CNNs, such as ResNet, VGG16, and Inception-net. In the included studies, GANs were used for 7 different purposes: data augmentation, segmentation of lungs within chest radiology images, superresolution of lung images to improve their quality, diagnosis of COVID-19 within the images, feature extraction, prognosis studies related to COVID-19, and synthesis of 3D CT volumes. Around 74% of the included studies used GAN-based methods for data augmentation to address the data scarcity challenge of COVID-19. This is not unexpected, as data augmentation is the most popular application of GANs. Only 1 study used the 3D variant of GAN for 3D synthesis of CT volumes. This is not surprising, as 3D synthesis of CT volumes using 3D GANs is computationally expensive; the computations may exceed the available resources of the graphics processing unit (GPU).</p>
        <p>Since there are many variants of GANs, this review also looked at the most commonly used GAN architectures in the included studies. The most common choice in the included studies was the cycleGAN, used in 9 studies. The cycleGAN is a GAN architecture that comprises 2 generators and 2 discriminators and does not require paired training data [<xref ref-type="bibr" rid="ref11">11</xref>]. Hence, it was a popular choice to generate COVID-19–positive images from normal images.</p>
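        <p>The key training signal that removes the need for paired data is the cycle-consistency loss: an image translated to the other domain and back should reconstruct the original. The idea can be sketched with toy one-dimensional "translators" (an illustrative example, not from any of the included studies; <code>g_xy</code> and <code>g_yx</code> stand in for the 2 generators):</p>

```python
def cycle_consistency_loss(x_batch, y_batch, g_xy, g_yx):
    """L1 cycle-consistency loss over two batches of samples.

    g_xy translates domain X -> Y; g_yx translates Y -> X.
    Translating a sample to the other domain and back should
    reconstruct it: g_yx(g_xy(x)) ~= x and g_xy(g_yx(y)) ~= y.
    """
    loss_x = sum(abs(g_yx(g_xy(x)) - x) for x in x_batch) / len(x_batch)
    loss_y = sum(abs(g_xy(g_yx(y)) - y) for y in y_batch) / len(y_batch)
    return loss_x + loss_y
```

<p>If the 2 generators are exact inverses of each other, the loss is 0; any deviation from invertibility is penalized, which is what lets the cycleGAN learn domain translation without paired examples.</p>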
        <p>This review analyzed the common imaging modalities for the different applications related to COVID-19. As chest X-ray imaging and CT scans are the most popular imaging methods for studying the infection in individuals, these 2 modalities dominated the included studies. Specifically, 29 studies used X-ray images only, 21 studies used CT images only, and 6 studies used both CT and X-ray images, either for training different models or for the transformation of images from X-ray to CT. Though ultrasound imaging is not prevalent in the clinical diagnosis of COVID-19, 1 study reported using ultrasound images to diagnose COVID-19 with GANs. No other imaging modality was used in the included studies.</p>
        <p>The majority of the included studies (n=47) used data that are publicly available on Github, Kaggle, or other publicly accessible websites. These data are acquired from multiple sources (eg, collected from more than 1 hospital or through crowdsourcing), which makes them more diverse and hence more useful for training GAN models. It is hoped that the use of publicly accessible data will also encourage other researchers to conduct experiments on these data sets. The rise in publications in 2021 can also be linked to the availability of public data sets, which continued to grow as the number of COVID-19 cases increased. A few of the included studies (n=10) used private or proprietary data sets, and hence, the details about those data sets are limited to what has been described in the corresponding studies.</p>
        <p>Only 13 studies provided information on the number of individuals whose data were used. Among these, only 1 study used data for more than 1000 individuals [<xref ref-type="bibr" rid="ref26">26</xref>] and 2 studies used data for more than 500 individuals [<xref ref-type="bibr" rid="ref29">29</xref>,<xref ref-type="bibr" rid="ref42">42</xref>]. The remaining 10 studies used data for less than 500 individuals. Given the size of the population infected with COVID-19 (more than 418 million cases as of this writing, as reported by the Johns Hopkins University Coronavirus Resource Center [<xref ref-type="bibr" rid="ref91">91</xref>]), the need for experiments with much more extensive data is obvious. With larger data sets, GANs can learn the inherent features of radiology images in a more generalized way. There is still a need for more contributions to publicly accessible data sets.</p>
        <fig id="figure5" position="float">
          <label>Figure 5</label>
          <caption>
            <p>Mapping of correlation between publications from each country vs year of publication. Studies in 2020 originated mostly from China, India, Egypt, and Canada. In 2021, many other countries also contributed to the published research.</p>
          </caption>
          <graphic xlink:href="medinform_v10i6e37365_fig5.png" alt-version="no" mimetype="image" position="float" xlink:type="simple"/>
        </fig>
      </sec>
      <sec>
        <title>Practical and Research Implications</title>
        <p>This review presented the different studies that used GANs for various COVID-19 applications. Data augmentation of COVID-19 imaging data was the most common application in the included studies. The augmented data can significantly improve the training of AI methods, particularly deep learning methods used for COVID-19 diagnosis. This review found that for most of the studies, the current CT and X-ray imaging data (even if smaller in size) are already available through publicly accessible links on Github, Kaggle, or institutional websites. This should encourage more researchers to build upon the available data sets and train more variants of deep learning and GAN-based methods to speed up the research progress on COVID-19. Similarly, researchers can also add to the existing data set on Github by uploading their data to the current data repositories. An example of crowdsourcing of data is the COVIDx image repository for lung X-ray images (see <xref ref-type="table" rid="table3">Table 3</xref>).</p>
        <p>This review identified that the code to reproduce the results was not available for the majority of the studies. Only 7 of the included studies provided a public link to the code. The availability of a public repository to reproduce the results for diagnosis or data augmentation can help advance the research as well as increase trust in the reported results, whether in terms of the quality of the generated images or the accuracy reported for diagnosis. However, assessing reproducibility using this code was beyond the scope of this review. Careful and responsible studies are needed to assess the published methods before their translation into clinical applications.</p>
        <p>The majority of the included studies (n=43) did not provide information on the number of patients, although they did mention the number of images used in the experiments. Therefore, it is unclear how many images were used per individual, and this lack of information limits the ability of readers to evaluate the reported performance in the context of the number of patients. Moreover, for public data sets with crowdsourced contributions, it is challenging to trace the number of images back to the number of individuals.</p>
        <p>Validation of the performance of GANs in terms of the quality and usability of the generated images has a significant role in promoting the acceptability of these methods. Of the included studies, only 2 reported that the results were presented to radiologists or clinicians for secondary validation. In 1 study on the synthesis of X-ray images, the radiologists agreed that the quality of the X-rays had improved but fell short of diagnostic quality for use in clinics [<xref ref-type="bibr" rid="ref45">45</xref>]. Although applying GAN-based methods to COVID-19 is tempting for many researchers, using these methods without radiologists and clinicians in the loop will hinder their acceptability for clinical applications. Evaluating a study based on its reporting of secondary evaluation by radiologists was beyond the scope of this review, though such an assessment would have added value to the studies and increased their acceptability. The lack of details on the individuals whose COVID-19 data were used in these studies may also hinder their acceptance for translation into clinical applications. Finally, the training of GANs is usually computationally demanding, requiring GPUs. More edge computing–based implementations are needed to make these models compatible with low-power devices. This will increase the acceptability of these methods in clinical devices.</p>
      </sec>
      <sec>
        <title>Strengths and Limitations</title>
        <sec>
          <title>Strengths</title>
          <p>Though several reviews can be found on the applications of AI techniques to COVID-19, no review was found that focused on the potential of GAN-based methods to combat COVID-19. Compared to other reviews [<xref ref-type="bibr" rid="ref3">3</xref>,<xref ref-type="bibr" rid="ref4">4</xref>,<xref ref-type="bibr" rid="ref6">6</xref>,<xref ref-type="bibr" rid="ref7">7</xref>], whose scope is broad because they attempted to cover many different AI models, this review provided a comprehensive analysis of the GAN-based approaches used primarily on lung CT and X-ray images. Similarly, although many reviews covered the applications of GANs in medical imaging [<xref ref-type="bibr" rid="ref10">10</xref>,<xref ref-type="bibr" rid="ref12">12</xref>-<xref ref-type="bibr" rid="ref15">15</xref>], their applications to lung images for COVID-19 have not been reviewed before. Thus, this review may be considered the first comprehensive review covering GAN-based methods applied to COVID-19 imaging data for different applications in general and data augmentation in particular. It can help readers understand how GAN-based approaches were used to address the problem of data scarcity and how the synthetic data generated by GANs were used to improve the performance of CNNs for COVID-19. This review also provided a thorough list of the various publicly available data sets of lung CT, lung X-ray, and lung ultrasound images; hence, it can serve as a single point of reference for readers to explore these data sets and use them in their research. This review is consistent with the PRISMA-ScR guidelines for scientific reviews [<xref ref-type="bibr" rid="ref16">16</xref>].</p>
        </sec>
        <sec>
          <title>Limitations</title>
          <p>This review included studies from 5 databases: PubMed, IEEE Xplore, ACM Digital Library, Scopus, and Google Scholar. Hence, it is possible that some literature not indexed in these databases was left out. However, given the coverage of these popular databases, the included studies form a comprehensive representation of the applications of GANs to COVID-19. For practical reasons, the review included only studies published in English. Since the scope of this review was limited to lung images, the potential of GANs for other types of medical data, such as electronic health records, textual data, and audio data (eg, recordings of coughing), was not covered. The results and interpretations presented in this review are derived from the information available in the included studies. Since different studies may have variations, and even missing details, in their reporting of the data set, the training and test sets, and the validation mechanism, a direct comparison of the results might not be possible. Inconsistent information on the number of images, the training mechanism for GANs, and the selection of test set examples may have affected the findings of this review. In addition, by modern standards of training deep learning models, the size of the data reported in most included studies is too small; therefore, the diagnostic accuracy reported in these studies may not generalize well. The findings and discussions of this review are mainly based on the authors’ understanding of GANs (and other AI methods) and do not necessarily reflect the comments and feedback of doctors and clinicians.</p>
        </sec>
      </sec>
      <sec>
        <title>Conclusion</title>
        <p>This scoping review comprehensively analyzed 57 studies on the use of GANs for COVID-19 lung imaging data. Similar to other deep learning and AI methods, GANs have demonstrated outstanding potential in research on improving COVID-19 diagnosis. However, the most significant application of GANs has been data augmentation, that is, generating synthetic chest CT or X-ray imaging data from the existing limited-size data, as the synthetic data had a direct bearing on the enhancement of diagnostic performance. Although GAN-based methods have demonstrated great potential, their adoption in COVID-19 research is still in its infancy. Notably, the translation of GAN-based methods into clinical applications remains limited due to limitations in the validation and generalization of the results, the lack of feedback from radiologists, and the limited explainability of these methods. Nevertheless, GAN-based methods can assist in enhancing the performance of COVID-19 diagnosis, even though they should not be used as independent tools. In addition, more research and advancements are needed toward the explainability and clinical translation of these methods. This will pave the way for broader acceptance of GAN-based methods in COVID-19 applications.</p>
      </sec>
    </sec>
  </body>
  <back>
    <app-group>
      <supplementary-material id="app1">
        <label>Multimedia Appendix 1</label>
        <p>Search strategy.</p>
        <media xlink:href="medinform_v10i6e37365_app1.docx" xlink:title="DOCX File , 19 KB"/>
      </supplementary-material>
      <supplementary-material id="app2">
        <label>Multimedia Appendix 2</label>
        <p>Interrater agreement matrices for study selection steps.</p>
        <media xlink:href="medinform_v10i6e37365_app2.docx" xlink:title="DOCX File , 22 KB"/>
      </supplementary-material>
      <supplementary-material id="app3">
        <label>Multimedia Appendix 3</label>
        <p>Data extraction form.</p>
        <media xlink:href="medinform_v10i6e37365_app3.docx" xlink:title="DOCX File , 24 KB"/>
      </supplementary-material>
      <supplementary-material id="app4">
        <label>Multimedia Appendix 4</label>
        <p>Characteristics of the included studies.</p>
        <media xlink:href="medinform_v10i6e37365_app4.xlsx" xlink:title="XLSX File  (Microsoft Excel File), 23 KB"/>
      </supplementary-material>
    </app-group>
    <glossary>
      <title>Abbreviations</title>
      <def-list>
        <def-item>
          <term id="abb1">ACM</term>
          <def>
            <p>Association for Computing Machinery</p>
          </def>
        </def-item>
        <def-item>
          <term id="abb2">AI</term>
          <def>
            <p>artificial intelligence</p>
          </def>
        </def-item>
        <def-item>
          <term id="abb3">AUROC</term>
          <def>
            <p>area under the receiver operating characteristic curve</p>
          </def>
        </def-item>
        <def-item>
          <term id="abb4">CNN</term>
          <def>
            <p>convolutional neural network</p>
          </def>
        </def-item>
        <def-item>
          <term id="abb5">CT</term>
          <def>
            <p>computed tomography</p>
          </def>
        </def-item>
        <def-item>
          <term id="abb6">FID</term>
          <def>
            <p>Fréchet inception distance</p>
          </def>
        </def-item>
        <def-item>
          <term id="abb7">GAN</term>
          <def>
            <p>generative adversarial network</p>
          </def>
        </def-item>
        <def-item>
          <term id="abb8">GPU</term>
          <def>
            <p>graphics processing unit</p>
          </def>
        </def-item>
        <def-item>
          <term id="abb9">PRISMA-ScR</term>
          <def>
            <p>Preferred Reporting Items for Systematic Reviews and Meta-Analyses Extension for Scoping Reviews</p>
          </def>
        </def-item>
        <def-item>
          <term id="abb10">PSNR</term>
          <def>
            <p>peak signal-to-noise ratio</p>
          </def>
        </def-item>
        <def-item>
          <term id="abb11">RT-PCR</term>
          <def>
            <p>reverse transcription–polymerase chain reaction</p>
          </def>
        </def-item>
        <def-item>
          <term id="abb12">SSIM</term>
          <def>
            <p>structural similarity measure</p>
          </def>
        </def-item>
      </def-list>
    </glossary>
    <ack>
      <p>HA contributed to the conception, design, literature search, data selection, data synthesis, data extraction, and drafting. ZS contributed to the design, data selection, data synthesis, and critical revision of the manuscript. All authors gave their final approval and accepted accountability for all aspects of the work.</p>
    </ack>
    <fn-group>
      <fn fn-type="conflict">
        <p>None declared.</p>
      </fn>
    </fn-group>
    <ref-list>
      <ref id="ref1">
        <label>1</label>
        <nlm-citation citation-type="journal">
          <person-group person-group-type="author">
            <name name-style="western">
              <surname>Chen</surname>
              <given-names>Y</given-names>
            </name>
            <name name-style="western">
              <surname>Li</surname>
              <given-names>L</given-names>
            </name>
          </person-group>
          <article-title>SARS-CoV-2: virus dynamics and host response</article-title>
          <source>Lancet Infect Dis</source>
          <year>2020</year>
          <month>05</month>
          <volume>20</volume>
          <issue>5</issue>
          <fpage>515</fpage>
          <lpage>516</lpage>
          <comment>
            <ext-link ext-link-type="uri" xlink:type="simple" xlink:href="https://ehoonline.biomedcentral.com/articles/10.1016/S1473-3099(20)30235-8"/>
          </comment>
          <pub-id pub-id-type="doi">10.1016/S1473-3099(20)30235-8</pub-id>
          <pub-id pub-id-type="medline">32213336</pub-id>
          <pub-id pub-id-type="pii">S1473-3099(20)30235-8</pub-id>
          <pub-id pub-id-type="pmcid">PMC7156233</pub-id>
        </nlm-citation>
      </ref>
      <ref id="ref2">
        <label>2</label>
        <nlm-citation citation-type="journal">
          <person-group person-group-type="author">
            <name name-style="western">
              <surname>Li</surname>
              <given-names>Y</given-names>
            </name>
            <name name-style="western">
              <surname>Yao</surname>
              <given-names>L</given-names>
            </name>
            <name name-style="western">
              <surname>Li</surname>
              <given-names>J</given-names>
            </name>
            <name name-style="western">
              <surname>Chen</surname>
              <given-names>L</given-names>
            </name>
            <name name-style="western">
              <surname>Song</surname>
              <given-names>Y</given-names>
            </name>
            <name name-style="western">
              <surname>Cai</surname>
              <given-names>Z</given-names>
            </name>
            <name name-style="western">
              <surname>Yang</surname>
              <given-names>C</given-names>
            </name>
          </person-group>
          <article-title>Stability issues of RT-PCR testing of SARS-CoV-2 for hospitalized patients clinically diagnosed with COVID-19</article-title>
          <source>J Med Virol</source>
          <year>2020</year>
          <month>07</month>
          <day>26</day>
          <volume>92</volume>
          <issue>7</issue>
          <fpage>903</fpage>
          <lpage>908</lpage>
          <comment>
            <ext-link ext-link-type="uri" xlink:type="simple" xlink:href="http://europepmc.org/abstract/MED/32219885"/>
          </comment>
          <pub-id pub-id-type="doi">10.1002/jmv.25786</pub-id>
          <pub-id pub-id-type="medline">32219885</pub-id>
          <pub-id pub-id-type="pmcid">PMC7228231</pub-id>
        </nlm-citation>
      </ref>
      <ref id="ref3">
        <label>3</label>
        <nlm-citation citation-type="journal">
          <person-group person-group-type="author">
            <name name-style="western">
              <surname>Abd-Alrazaq</surname>
              <given-names>A</given-names>
            </name>
            <name name-style="western">
              <surname>Alajlani</surname>
              <given-names>M</given-names>
            </name>
            <name name-style="western">
              <surname>Alhuwail</surname>
              <given-names>D</given-names>
            </name>
            <name name-style="western">
              <surname>Schneider</surname>
              <given-names>J</given-names>
            </name>
            <name name-style="western">
              <surname>Al-Kuwari</surname>
              <given-names>S</given-names>
            </name>
            <name name-style="western">
              <surname>Shah</surname>
              <given-names>Z</given-names>
            </name>
            <name name-style="western">
              <surname>Hamdi</surname>
              <given-names>M</given-names>
            </name>
            <name name-style="western">
              <surname>Househ</surname>
              <given-names>M</given-names>
            </name>
          </person-group>
          <article-title>Artificial intelligence in the fight against COVID-19: scoping review</article-title>
          <source>J Med Internet Res</source>
          <year>2020</year>
          <month>12</month>
          <day>15</day>
          <volume>22</volume>
          <issue>12</issue>
          <fpage>e20756</fpage>
          <comment>
            <ext-link ext-link-type="uri" xlink:type="simple" xlink:href="https://www.jmir.org/2020/12/e20756/"/>
          </comment>
          <pub-id pub-id-type="doi">10.2196/20756</pub-id>
          <pub-id pub-id-type="medline">33284779</pub-id>
          <pub-id pub-id-type="pii">v22i12e20756</pub-id>
          <pub-id pub-id-type="pmcid">PMC7744141</pub-id>
        </nlm-citation>
      </ref>
      <ref id="ref4">
        <label>4</label>
        <nlm-citation citation-type="journal">
          <person-group person-group-type="author">
            <name name-style="western">
              <surname>Wang</surname>
              <given-names>L</given-names>
            </name>
            <name name-style="western">
              <surname>Zhang</surname>
              <given-names>Y</given-names>
            </name>
            <name name-style="western">
              <surname>Wang</surname>
              <given-names>D</given-names>
            </name>
            <name name-style="western">
              <surname>Tong</surname>
              <given-names>X</given-names>
            </name>
            <name name-style="western">
              <surname>Liu</surname>
              <given-names>T</given-names>
            </name>
            <name name-style="western">
              <surname>Zhang</surname>
              <given-names>S</given-names>
            </name>
            <name name-style="western">
              <surname>Huang</surname>
              <given-names>J</given-names>
            </name>
            <name name-style="western">
              <surname>Zhang</surname>
              <given-names>L</given-names>
            </name>
            <name name-style="western">
              <surname>Chen</surname>
              <given-names>L</given-names>
            </name>
            <name name-style="western">
              <surname>Fan</surname>
              <given-names>H</given-names>
            </name>
            <name name-style="western">
              <surname>Clarke</surname>
              <given-names>M</given-names>
            </name>
          </person-group>
          <article-title>Artificial intelligence for COVID-19: a systematic review</article-title>
          <source>Front Med (Lausanne)</source>
          <year>2021</year>
          <month>9</month>
          <day>30</day>
          <volume>8</volume>
          <fpage>704256</fpage>
          <comment>
            <ext-link ext-link-type="uri" xlink:type="simple" xlink:href="https://doi.org/10.3389/fmed.2021.704256"/>
          </comment>
          <pub-id pub-id-type="doi">10.3389/fmed.2021.704256</pub-id>
          <pub-id pub-id-type="medline">34660623</pub-id>
          <pub-id pub-id-type="pmcid">PMC8514781</pub-id>
        </nlm-citation>
      </ref>
      <ref id="ref5">
        <label>5</label>
        <nlm-citation citation-type="journal">
          <person-group person-group-type="author">
            <name name-style="western">
              <surname>Chowdhury</surname>
              <given-names>MEH</given-names>
            </name>
            <name name-style="western">
              <surname>Rahman</surname>
              <given-names>T</given-names>
            </name>
            <name name-style="western">
              <surname>Khandakar</surname>
              <given-names>A</given-names>
            </name>
            <name name-style="western">
              <surname>Mazhar</surname>
              <given-names>R</given-names>
            </name>
            <name name-style="western">
              <surname>Kadir</surname>
              <given-names>MA</given-names>
            </name>
            <name name-style="western">
              <surname>Mahbub</surname>
              <given-names>ZB</given-names>
            </name>
            <name name-style="western">
              <surname>Islam</surname>
              <given-names>KR</given-names>
            </name>
            <name name-style="western">
              <surname>Khan</surname>
              <given-names>MS</given-names>
            </name>
            <name name-style="western">
              <surname>Iqbal</surname>
              <given-names>A</given-names>
            </name>
            <name name-style="western">
              <surname>Emadi</surname>
              <given-names>NA</given-names>
            </name>
            <name name-style="western">
              <surname>Reaz</surname>
              <given-names>MBI</given-names>
            </name>
            <name name-style="western">
              <surname>Islam</surname>
              <given-names>MT</given-names>
            </name>
          </person-group>
          <article-title>Can AI help in screening viral and COVID-19 pneumonia?</article-title>
          <source>IEEE Access</source>
          <year>2020</year>
          <volume>8</volume>
          <fpage>132665</fpage>
          <lpage>132676</lpage>
          <pub-id pub-id-type="doi">10.1109/access.2020.3010287</pub-id>
        </nlm-citation>
      </ref>
      <ref id="ref6">
        <label>6</label>
        <nlm-citation citation-type="journal">
          <person-group person-group-type="author">
            <name name-style="western">
              <surname>Alimadadi</surname>
              <given-names>A</given-names>
            </name>
            <name name-style="western">
              <surname>Aryal</surname>
              <given-names>S</given-names>
            </name>
            <name name-style="western">
              <surname>Manandhar</surname>
              <given-names>I</given-names>
            </name>
            <name name-style="western">
              <surname>Munroe</surname>
              <given-names>PB</given-names>
            </name>
            <name name-style="western">
              <surname>Joe</surname>
              <given-names>B</given-names>
            </name>
            <name name-style="western">
              <surname>Cheng</surname>
              <given-names>X</given-names>
            </name>
          </person-group>
          <article-title>Artificial intelligence and machine learning to fight COVID-19</article-title>
          <source>Physiol Genomics</source>
          <year>2020</year>
          <month>04</month>
          <day>01</day>
          <volume>52</volume>
          <issue>4</issue>
          <fpage>200</fpage>
          <lpage>202</lpage>
          <comment>
            <ext-link ext-link-type="uri" xlink:type="simple" xlink:href="http://europepmc.org/abstract/MED/32216577"/>
          </comment>
          <pub-id pub-id-type="doi">10.1152/physiolgenomics.00029.2020</pub-id>
          <pub-id pub-id-type="medline">32216577</pub-id>
          <pub-id pub-id-type="pmcid">PMC7191426</pub-id>
        </nlm-citation>
      </ref>
      <ref id="ref7">
        <label>7</label>
        <nlm-citation citation-type="journal">
          <person-group person-group-type="author">
            <name name-style="western">
              <surname>Latif</surname>
              <given-names>S</given-names>
            </name>
            <name name-style="western">
              <surname>Usman</surname>
              <given-names>M</given-names>
            </name>
            <name name-style="western">
              <surname>Manzoor</surname>
              <given-names>S</given-names>
            </name>
            <name name-style="western">
              <surname>Iqbal</surname>
              <given-names>W</given-names>
            </name>
            <name name-style="western">
              <surname>Qadir</surname>
              <given-names>J</given-names>
            </name>
            <name name-style="western">
              <surname>Tyson</surname>
              <given-names>G</given-names>
            </name>
            <name name-style="western">
              <surname>Castro</surname>
              <given-names>I</given-names>
            </name>
            <name name-style="western">
              <surname>Razi</surname>
              <given-names>A</given-names>
            </name>
            <name name-style="western">
              <surname>Boulos</surname>
              <given-names>MNK</given-names>
            </name>
            <name name-style="western">
              <surname>Weller</surname>
              <given-names>A</given-names>
            </name>
            <name name-style="western">
              <surname>Crowcroft</surname>
              <given-names>J</given-names>
            </name>
          </person-group>
          <article-title>Leveraging data science to combat COVID-19: a comprehensive review</article-title>
          <source>IEEE Trans Artif Intell</source>
          <year>2020</year>
          <month>8</month>
          <volume>1</volume>
          <issue>1</issue>
          <fpage>85</fpage>
          <lpage>103</lpage>
          <pub-id pub-id-type="doi">10.1109/tai.2020.3020521</pub-id>
        </nlm-citation>
      </ref>
      <ref id="ref8">
        <label>8</label>
        <nlm-citation citation-type="journal">
          <person-group person-group-type="author">
            <name name-style="western">
              <surname>Ali</surname>
              <given-names>H</given-names>
            </name>
            <name name-style="western">
              <surname>Umander</surname>
              <given-names>J</given-names>
            </name>
            <name name-style="western">
              <surname>Rohlen</surname>
              <given-names>R</given-names>
            </name>
            <name name-style="western">
              <surname>Gronlund</surname>
              <given-names>C</given-names>
            </name>
          </person-group>
          <article-title>A deep learning pipeline for identification of motor units in musculoskeletal ultrasound</article-title>
          <source>IEEE Access</source>
          <year>2020</year>
          <volume>8</volume>
          <fpage>170595</fpage>
          <lpage>170608</lpage>
          <pub-id pub-id-type="doi">10.1109/access.2020.3023495</pub-id>
        </nlm-citation>
      </ref>
      <ref id="ref9">
        <label>9</label>
        <nlm-citation citation-type="journal">
          <person-group person-group-type="author">
            <name name-style="western">
              <surname>Iqbal</surname>
              <given-names>T</given-names>
            </name>
            <name name-style="western">
              <surname>Ali</surname>
              <given-names>H</given-names>
            </name>
          </person-group>
          <article-title>Generative adversarial network for medical images (MI-GAN)</article-title>
          <source>J Med Syst</source>
          <year>2018</year>
          <month>10</month>
          <day>12</day>
          <volume>42</volume>
          <issue>11</issue>
          <fpage>231</fpage>
          <pub-id pub-id-type="doi">10.1007/s10916-018-1072-9</pub-id>
          <pub-id pub-id-type="medline">30315368</pub-id>
          <pub-id pub-id-type="pii">10.1007/s10916-018-1072-9</pub-id>
        </nlm-citation>
      </ref>
      <ref id="ref10">
        <label>10</label>
        <nlm-citation citation-type="journal">
          <person-group person-group-type="author">
            <name name-style="western">
              <surname>Yi</surname>
              <given-names>X</given-names>
            </name>
            <name name-style="western">
              <surname>Walia</surname>
              <given-names>E</given-names>
            </name>
            <name name-style="western">
              <surname>Babyn</surname>
              <given-names>P</given-names>
            </name>
          </person-group>
          <article-title>Generative adversarial network in medical imaging: a review</article-title>
          <source>Med Image Anal</source>
          <year>2019</year>
          <month>12</month>
          <volume>58</volume>
          <fpage>101552</fpage>
          <pub-id pub-id-type="doi">10.1016/j.media.2019.101552</pub-id>
          <pub-id pub-id-type="medline">31521965</pub-id>
          <pub-id pub-id-type="pii">S1361-8415(18)30843-0</pub-id>
        </nlm-citation>
      </ref>
      <ref id="ref11">
        <label>11</label>
        <nlm-citation citation-type="journal">
          <person-group person-group-type="author">
            <name name-style="western">
              <surname>Zhu</surname>
              <given-names>J-Y</given-names>
            </name>
            <name name-style="western">
              <surname>Park</surname>
              <given-names>T</given-names>
            </name>
            <name name-style="western">
              <surname>Isola</surname>
              <given-names>P</given-names>
            </name>
            <name name-style="western">
              <surname>Efros</surname>
              <given-names>AA</given-names>
            </name>
          </person-group>
          <article-title>Unpaired image-to-image translation using cycle-consistent adversarial networks</article-title>
          <source>Proc IEEE Int Conf Comput Vis</source>
          <year>2017</year>
          <fpage>2223</fpage>
          <lpage>2232</lpage>
          <pub-id pub-id-type="doi">10.1109/iccv.2017.244</pub-id>
        </nlm-citation>
      </ref>
      <ref id="ref12">
        <label>12</label>
        <nlm-citation citation-type="journal">
          <person-group person-group-type="author">
            <name name-style="western">
              <surname>Lan</surname>
              <given-names>L</given-names>
            </name>
            <name name-style="western">
              <surname>You</surname>
              <given-names>L</given-names>
            </name>
            <name name-style="western">
              <surname>Zhang</surname>
              <given-names>Z</given-names>
            </name>
            <name name-style="western">
              <surname>Fan</surname>
              <given-names>Z</given-names>
            </name>
            <name name-style="western">
              <surname>Zhao</surname>
              <given-names>W</given-names>
            </name>
            <name name-style="western">
              <surname>Zeng</surname>
              <given-names>N</given-names>
            </name>
            <name name-style="western">
              <surname>Chen</surname>
              <given-names>Y</given-names>
            </name>
            <name name-style="western">
              <surname>Zhou</surname>
              <given-names>X</given-names>
            </name>
          </person-group>
          <article-title>Generative adversarial networks and its applications in biomedical informatics</article-title>
          <source>Front Public Health</source>
          <year>2020</year>
          <month>5</month>
          <day>12</day>
          <volume>8</volume>
          <fpage>164</fpage>
          <comment>
            <ext-link ext-link-type="uri" xlink:type="simple" xlink:href="https://doi.org/10.3389/fpubh.2020.00164"/>
          </comment>
          <pub-id pub-id-type="doi">10.3389/fpubh.2020.00164</pub-id>
          <pub-id pub-id-type="medline">32478029</pub-id>
          <pub-id pub-id-type="pmcid">PMC7235323</pub-id>
        </nlm-citation>
      </ref>
      <ref id="ref13">
        <label>13</label>
        <nlm-citation citation-type="book">
          <person-group person-group-type="author">
            <name name-style="western">
              <surname>Singh</surname>
              <given-names>NK</given-names>
            </name>
            <name name-style="western">
              <surname>Raza</surname>
              <given-names>K</given-names>
            </name>
          </person-group>
          <person-group person-group-type="editor">
            <name name-style="western">
              <surname>Patgiri</surname>
              <given-names>R</given-names>
            </name>
            <name name-style="western">
              <surname>Biswas</surname>
              <given-names>A</given-names>
            </name>
            <name name-style="western">
              <surname>Roy</surname>
              <given-names>P</given-names>
            </name>
          </person-group>
          <article-title>Medical image generation using generative adversarial networks: a review</article-title>
          <source>Health Informatics: A Computational Perspective in Healthcare. Studies in Computational Intelligence, Volume 932</source>
          <year>2021</year>
          <publisher-loc>Singapore</publisher-loc>
          <publisher-name>Springer</publisher-name>
        </nlm-citation>
      </ref>
      <ref id="ref14">
        <label>14</label>
        <nlm-citation citation-type="journal">
          <person-group person-group-type="author">
            <name name-style="western">
              <surname>Wang</surname>
              <given-names>T</given-names>
            </name>
            <name name-style="western">
              <surname>Lei</surname>
              <given-names>Y</given-names>
            </name>
            <name name-style="western">
              <surname>Fu</surname>
              <given-names>Y</given-names>
            </name>
            <name name-style="western">
              <surname>Wynne</surname>
              <given-names>JF</given-names>
            </name>
            <name name-style="western">
              <surname>Curran</surname>
              <given-names>WJ</given-names>
            </name>
            <name name-style="western">
              <surname>Liu</surname>
              <given-names>T</given-names>
            </name>
            <name name-style="western">
              <surname>Yang</surname>
              <given-names>X</given-names>
            </name>
          </person-group>
          <article-title>A review on medical imaging synthesis using deep learning and its clinical applications</article-title>
          <source>J Appl Clin Med Phys</source>
          <year>2021</year>
          <month>01</month>
          <day>11</day>
          <volume>22</volume>
          <issue>1</issue>
          <fpage>11</fpage>
          <lpage>36</lpage>
          <comment>
            <ext-link ext-link-type="uri" xlink:type="simple" xlink:href="http://europepmc.org/abstract/MED/33305538"/>
          </comment>
          <pub-id pub-id-type="doi">10.1002/acm2.13121</pub-id>
          <pub-id pub-id-type="medline">33305538</pub-id>
          <pub-id pub-id-type="pmcid">PMC7856512</pub-id>
        </nlm-citation>
      </ref>
      <ref id="ref15">
        <label>15</label>
        <nlm-citation citation-type="journal">
          <person-group person-group-type="author">
            <name name-style="western">
              <surname>Saeed</surname>
              <given-names>AQ</given-names>
            </name>
            <name name-style="western">
              <surname>Sheikh Abdullah</surname>
              <given-names>SNH</given-names>
            </name>
            <name name-style="western">
              <surname>Che-Hamzah</surname>
              <given-names>J</given-names>
            </name>
            <name name-style="western">
              <surname>Abdul Ghani</surname>
              <given-names>AT</given-names>
            </name>
          </person-group>
          <article-title>Accuracy of using generative adversarial networks for glaucoma detection: systematic review and bibliometric analysis</article-title>
          <source>J Med Internet Res</source>
          <year>2021</year>
          <month>09</month>
          <day>21</day>
          <volume>23</volume>
          <issue>9</issue>
          <fpage>e27414</fpage>
          <comment>
            <ext-link ext-link-type="uri" xlink:type="simple" xlink:href="https://www.jmir.org/2021/9/e27414/"/>
          </comment>
          <pub-id pub-id-type="doi">10.2196/27414</pub-id>
          <pub-id pub-id-type="medline">34236992</pub-id>
          <pub-id pub-id-type="pii">v23i9e27414</pub-id>
          <pub-id pub-id-type="pmcid">PMC8493455</pub-id>
        </nlm-citation>
      </ref>
      <ref id="ref16">
        <label>16</label>
        <nlm-citation citation-type="journal">
          <person-group person-group-type="author">
            <name name-style="western">
              <surname>Tricco</surname>
              <given-names>AC</given-names>
            </name>
            <name name-style="western">
              <surname>Lillie</surname>
              <given-names>E</given-names>
            </name>
            <name name-style="western">
              <surname>Zarin</surname>
              <given-names>W</given-names>
            </name>
            <name name-style="western">
              <surname>O'Brien</surname>
              <given-names>KK</given-names>
            </name>
            <name name-style="western">
              <surname>Colquhoun</surname>
              <given-names>H</given-names>
            </name>
            <name name-style="western">
              <surname>Levac</surname>
              <given-names>D</given-names>
            </name>
            <name name-style="western">
              <surname>Moher</surname>
              <given-names>D</given-names>
            </name>
            <name name-style="western">
              <surname>Peters</surname>
              <given-names>MD</given-names>
            </name>
            <name name-style="western">
              <surname>Horsley</surname>
              <given-names>T</given-names>
            </name>
            <name name-style="western">
              <surname>Weeks</surname>
              <given-names>L</given-names>
            </name>
            <name name-style="western">
              <surname>Hempel</surname>
              <given-names>S</given-names>
            </name>
            <name name-style="western">
              <surname>Akl</surname>
              <given-names>EA</given-names>
            </name>
            <name name-style="western">
              <surname>Chang</surname>
              <given-names>C</given-names>
            </name>
            <name name-style="western">
              <surname>McGowan</surname>
              <given-names>J</given-names>
            </name>
            <name name-style="western">
              <surname>Stewart</surname>
              <given-names>L</given-names>
            </name>
            <name name-style="western">
              <surname>Hartling</surname>
              <given-names>L</given-names>
            </name>
            <name name-style="western">
              <surname>Aldcroft</surname>
              <given-names>A</given-names>
            </name>
            <name name-style="western">
              <surname>Wilson</surname>
              <given-names>MG</given-names>
            </name>
            <name name-style="western">
              <surname>Garritty</surname>
              <given-names>C</given-names>
            </name>
            <name name-style="western">
              <surname>Lewin</surname>
              <given-names>S</given-names>
            </name>
            <name name-style="western">
              <surname>Godfrey</surname>
              <given-names>CM</given-names>
            </name>
            <name name-style="western">
              <surname>Macdonald</surname>
              <given-names>MT</given-names>
            </name>
            <name name-style="western">
              <surname>Langlois</surname>
              <given-names>EV</given-names>
            </name>
            <name name-style="western">
              <surname>Soares-Weiser</surname>
              <given-names>K</given-names>
            </name>
            <name name-style="western">
              <surname>Moriarty</surname>
              <given-names>J</given-names>
            </name>
            <name name-style="western">
              <surname>Clifford</surname>
              <given-names>T</given-names>
            </name>
            <name name-style="western">
              <surname>Tunçalp</surname>
              <given-names>Ö</given-names>
            </name>
            <name name-style="western">
              <surname>Straus</surname>
              <given-names>SE</given-names>
            </name>
          </person-group>
          <article-title>PRISMA Extension for Scoping Reviews (PRISMA-ScR): checklist and explanation</article-title>
          <source>Ann Intern Med</source>
          <year>2018</year>
          <month>10</month>
          <day>02</day>
          <volume>169</volume>
          <issue>7</issue>
          <fpage>467</fpage>
          <lpage>473</lpage>
          <pub-id pub-id-type="doi">10.7326/m18-0850</pub-id>
        </nlm-citation>
      </ref>
      <ref id="ref17">
        <label>17</label>
        <nlm-citation citation-type="book">
          <person-group person-group-type="author">
            <name name-style="western">
              <surname>Higgins</surname>
              <given-names>J</given-names>
            </name>
            <name name-style="western">
              <surname>Deeks</surname>
              <given-names>J</given-names>
            </name>
          </person-group>
          <article-title>Chapter 7: selecting studies and collecting data</article-title>
          <source>Cochrane Handbook for Systematic Reviews of Interventions</source>
          <year>2008</year>
          <publisher-loc>Hoboken, NJ</publisher-loc>
          <publisher-name>John Wiley &#38; Sons</publisher-name>
        </nlm-citation>
      </ref>
      <ref id="ref18">
        <label>18</label>
        <nlm-citation citation-type="journal">
          <person-group person-group-type="author">
            <name name-style="western">
              <surname>Jiang</surname>
              <given-names>Y</given-names>
            </name>
            <name name-style="western">
              <surname>Chen</surname>
              <given-names>H</given-names>
            </name>
            <name name-style="western">
              <surname>Loew</surname>
              <given-names>M</given-names>
            </name>
            <name name-style="western">
              <surname>Ko</surname>
              <given-names>H</given-names>
            </name>
          </person-group>
          <article-title>COVID-19 CT image synthesis with a conditional generative adversarial network</article-title>
          <source>IEEE J Biomed Health Inform</source>
          <year>2021</year>
          <month>2</month>
          <volume>25</volume>
          <issue>2</issue>
          <fpage>441</fpage>
          <lpage>452</lpage>
          <pub-id pub-id-type="doi">10.1109/jbhi.2020.3042523</pub-id>
        </nlm-citation>
      </ref>
      <ref id="ref19">
        <label>19</label>
        <nlm-citation citation-type="journal">
          <person-group person-group-type="author">
            <name name-style="western">
              <surname>Song</surname>
              <given-names>J</given-names>
            </name>
            <name name-style="western">
              <surname>Wang</surname>
              <given-names>H</given-names>
            </name>
            <name name-style="western">
              <surname>Liu</surname>
              <given-names>Y</given-names>
            </name>
            <name name-style="western">
              <surname>Wu</surname>
              <given-names>W</given-names>
            </name>
            <name name-style="western">
              <surname>Dai</surname>
              <given-names>G</given-names>
            </name>
            <name name-style="western">
              <surname>Wu</surname>
              <given-names>Z</given-names>
            </name>
            <name name-style="western">
              <surname>Zhu</surname>
              <given-names>P</given-names>
            </name>
            <name name-style="western">
              <surname>Zhang</surname>
              <given-names>W</given-names>
            </name>
            <name name-style="western">
              <surname>Yeom</surname>
              <given-names>KW</given-names>
            </name>
            <name name-style="western">
              <surname>Deng</surname>
              <given-names>K</given-names>
            </name>
          </person-group>
          <article-title>End-to-end automatic differentiation of the coronavirus disease 2019 (COVID-19) from viral pneumonia based on chest CT</article-title>
          <source>Eur J Nucl Med Mol Imaging</source>
          <year>2020</year>
          <month>10</month>
          <day>22</day>
          <volume>47</volume>
          <issue>11</issue>
          <fpage>2516</fpage>
          <lpage>2524</lpage>
          <comment>
            <ext-link ext-link-type="uri" xlink:type="simple" xlink:href="http://europepmc.org/abstract/MED/32567006"/>
          </comment>
          <pub-id pub-id-type="doi">10.1007/s00259-020-04929-1</pub-id>
          <pub-id pub-id-type="medline">32567006</pub-id>
          <pub-id pub-id-type="pii">10.1007/s00259-020-04929-1</pub-id>
          <pub-id pub-id-type="pmcid">PMC7306401</pub-id>
        </nlm-citation>
      </ref>
      <ref id="ref20">
        <label>20</label>
        <nlm-citation citation-type="journal">
          <person-group person-group-type="author">
            <name name-style="western">
              <surname>Motamed</surname>
              <given-names>S</given-names>
            </name>
            <name name-style="western">
              <surname>Rogalla</surname>
              <given-names>P</given-names>
            </name>
            <name name-style="western">
              <surname>Khalvati</surname>
              <given-names>F</given-names>
            </name>
          </person-group>
          <article-title>RANDGAN: randomized generative adversarial network for detection of COVID-19 in chest X-ray</article-title>
          <source>Sci Rep</source>
          <year>2021</year>
          <month>04</month>
          <day>21</day>
          <volume>11</volume>
          <issue>1</issue>
          <fpage>8602</fpage>
          <comment>
            <ext-link ext-link-type="uri" xlink:type="simple" xlink:href="https://doi.org/10.1038/s41598-021-87994-2"/>
          </comment>
          <pub-id pub-id-type="doi">10.1038/s41598-021-87994-2</pub-id>
          <pub-id pub-id-type="medline">33883609</pub-id>
          <pub-id pub-id-type="pii">10.1038/s41598-021-87994-2</pub-id>
          <pub-id pub-id-type="pmcid">PMC8060427</pub-id>
        </nlm-citation>
      </ref>
      <ref id="ref21">
        <label>21</label>
        <nlm-citation citation-type="journal">
          <person-group person-group-type="author">
            <name name-style="western">
              <surname>Autee</surname>
              <given-names>P</given-names>
            </name>
            <name name-style="western">
              <surname>Bagwe</surname>
              <given-names>S</given-names>
            </name>
            <name name-style="western">
              <surname>Shah</surname>
              <given-names>V</given-names>
            </name>
            <name name-style="western">
              <surname>Srivastava</surname>
              <given-names>K</given-names>
            </name>
          </person-group>
          <article-title>StackNet-DenVIS: a multi-layer perceptron stacked ensembling approach for COVID-19 detection using X-ray images</article-title>
          <source>Phys Eng Sci Med</source>
          <year>2020</year>
          <month>12</month>
          <day>04</day>
          <volume>43</volume>
          <issue>4</issue>
          <fpage>1399</fpage>
          <lpage>1414</lpage>
          <comment>
            <ext-link ext-link-type="uri" xlink:type="simple" xlink:href="http://europepmc.org/abstract/MED/33275187"/>
          </comment>
          <pub-id pub-id-type="doi">10.1007/s13246-020-00952-6</pub-id>
          <pub-id pub-id-type="medline">33275187</pub-id>
          <pub-id pub-id-type="pii">10.1007/s13246-020-00952-6</pub-id>
          <pub-id pub-id-type="pmcid">PMC7715648</pub-id>
        </nlm-citation>
      </ref>
      <ref id="ref22">
        <label>22</label>
        <nlm-citation citation-type="journal">
          <person-group person-group-type="author">
            <name name-style="western">
              <surname>Uemura</surname>
              <given-names>T</given-names>
            </name>
            <name name-style="western">
              <surname>Näppi</surname>
              <given-names>JJ</given-names>
            </name>
            <name name-style="western">
              <surname>Watari</surname>
              <given-names>C</given-names>
            </name>
            <name name-style="western">
              <surname>Hironaka</surname>
              <given-names>T</given-names>
            </name>
            <name name-style="western">
              <surname>Kamiya</surname>
              <given-names>T</given-names>
            </name>
            <name name-style="western">
              <surname>Yoshida</surname>
              <given-names>H</given-names>
            </name>
          </person-group>
          <article-title>Weakly unsupervised conditional generative adversarial network for image-based prognostic prediction for COVID-19 patients based on chest CT</article-title>
          <source>Med Image Anal</source>
          <year>2021</year>
          <month>10</month>
          <volume>73</volume>
          <fpage>102159</fpage>
          <comment>
            <ext-link ext-link-type="uri" xlink:type="simple" xlink:href="https://linkinghub.elsevier.com/retrieve/pii/S1361-8415(21)00205-X"/>
          </comment>
          <pub-id pub-id-type="doi">10.1016/j.media.2021.102159</pub-id>
          <pub-id pub-id-type="medline">34303892</pub-id>
          <pub-id pub-id-type="pii">S1361-8415(21)00205-X</pub-id>
          <pub-id pub-id-type="pmcid">PMC8272947</pub-id>
        </nlm-citation>
      </ref>
      <ref id="ref23">
        <label>23</label>
        <nlm-citation citation-type="journal">
          <person-group person-group-type="author">
            <name name-style="western">
              <surname>Goel</surname>
              <given-names>T</given-names>
            </name>
            <name name-style="western">
              <surname>Murugan</surname>
              <given-names>R</given-names>
            </name>
            <name name-style="western">
              <surname>Mirjalili</surname>
              <given-names>S</given-names>
            </name>
            <name name-style="western">
              <surname>Chakrabartty</surname>
              <given-names>DK</given-names>
            </name>
          </person-group>
          <article-title>Automatic screening of COVID-19 using an optimized generative adversarial network</article-title>
          <source>Cognit Comput</source>
          <year>2021</year>
          <month>01</month>
          <day>25</day>
          <fpage>1</fpage>
          <lpage>16</lpage>
          <comment>
            <ext-link ext-link-type="uri" xlink:type="simple" xlink:href="http://europepmc.org/abstract/MED/33520007"/>
          </comment>
          <pub-id pub-id-type="doi">10.1007/s12559-020-09785-7</pub-id>
          <pub-id pub-id-type="medline">33520007</pub-id>
          <pub-id pub-id-type="pii">9785</pub-id>
          <pub-id pub-id-type="pmcid">PMC7829098</pub-id>
        </nlm-citation>
      </ref>
      <ref id="ref24">
        <label>24</label>
        <nlm-citation citation-type="journal">
          <person-group person-group-type="author">
            <name name-style="western">
              <surname>Loey</surname>
              <given-names>M</given-names>
            </name>
            <name name-style="western">
              <surname>Manogaran</surname>
              <given-names>G</given-names>
            </name>
            <name name-style="western">
              <surname>Khalifa</surname>
              <given-names>NEM</given-names>
            </name>
          </person-group>
          <article-title>A deep transfer learning model with classical data augmentation and CGAN to detect COVID-19 from chest CT radiography digital images</article-title>
          <source>Neural Comput Appl</source>
          <year>2020</year>
          <month>10</month>
          <day>26</day>
          <fpage>1</fpage>
          <lpage>13</lpage>
          <comment>
            <ext-link ext-link-type="uri" xlink:type="simple" xlink:href="http://europepmc.org/abstract/MED/33132536"/>
          </comment>
          <pub-id pub-id-type="doi">10.1007/s00521-020-05437-x</pub-id>
          <pub-id pub-id-type="medline">33132536</pub-id>
          <pub-id pub-id-type="pii">5437</pub-id>
          <pub-id pub-id-type="pmcid">PMC7586204</pub-id>
        </nlm-citation>
      </ref>
      <ref id="ref25">
        <label>25</label>
        <nlm-citation citation-type="journal">
          <person-group person-group-type="author">
            <name name-style="western">
              <surname>Karakanis</surname>
              <given-names>S</given-names>
            </name>
            <name name-style="western">
              <surname>Leontidis</surname>
              <given-names>G</given-names>
            </name>
          </person-group>
          <article-title>Lightweight deep learning models for detecting COVID-19 from chest X-ray images</article-title>
          <source>Comput Biol Med</source>
          <year>2021</year>
          <month>03</month>
          <volume>130</volume>
          <fpage>104181</fpage>
          <comment>
            <ext-link ext-link-type="uri" xlink:type="simple" xlink:href="http://europepmc.org/abstract/MED/33360271"/>
          </comment>
          <pub-id pub-id-type="doi">10.1016/j.compbiomed.2020.104181</pub-id>
          <pub-id pub-id-type="medline">33360271</pub-id>
          <pub-id pub-id-type="pii">S0010-4825(20)30512-6</pub-id>
          <pub-id pub-id-type="pmcid">PMC7831681</pub-id>
        </nlm-citation>
      </ref>
      <ref id="ref26">
        <label>26</label>
        <nlm-citation citation-type="journal">
          <person-group person-group-type="author">
            <name name-style="western">
              <surname>Li</surname>
              <given-names>Z</given-names>
            </name>
            <name name-style="western">
              <surname>Zhang</surname>
              <given-names>J</given-names>
            </name>
            <name name-style="western">
              <surname>Li</surname>
              <given-names>B</given-names>
            </name>
            <name name-style="western">
              <surname>Gu</surname>
              <given-names>X</given-names>
            </name>
            <name name-style="western">
              <surname>Luo</surname>
              <given-names>X</given-names>
            </name>
          </person-group>
          <article-title>COVID-19 diagnosis on CT scan images using a generative adversarial network and concatenated feature pyramid network with an attention mechanism</article-title>
          <source>Med Phys</source>
          <year>2021</year>
          <month>08</month>
          <day>09</day>
          <volume>48</volume>
          <issue>8</issue>
          <fpage>4334</fpage>
          <lpage>4349</lpage>
          <comment>
            <ext-link ext-link-type="uri" xlink:type="simple" xlink:href="http://europepmc.org/abstract/MED/34117783"/>
          </comment>
          <pub-id pub-id-type="doi">10.1002/mp.15044</pub-id>
          <pub-id pub-id-type="medline">34117783</pub-id>
          <pub-id pub-id-type="pmcid">PMC8420535</pub-id>
        </nlm-citation>
      </ref>
      <ref id="ref27">
        <label>27</label>
        <nlm-citation citation-type="journal">
          <person-group person-group-type="author">
            <name name-style="western">
              <surname>Zhang</surname>
              <given-names>L</given-names>
            </name>
            <name name-style="western">
              <surname>Shen</surname>
              <given-names>B</given-names>
            </name>
            <name name-style="western">
              <surname>Barnawi</surname>
              <given-names>A</given-names>
            </name>
            <name name-style="western">
              <surname>Xi</surname>
              <given-names>S</given-names>
            </name>
            <name name-style="western">
              <surname>Kumar</surname>
              <given-names>N</given-names>
            </name>
            <name name-style="western">
              <surname>Wu</surname>
              <given-names>Y</given-names>
            </name>
          </person-group>
          <article-title>FedDPGAN: federated differentially private generative adversarial networks framework for the detection of COVID-19 pneumonia</article-title>
          <source>Inf Syst Front</source>
          <year>2021</year>
          <month>06</month>
          <day>15</day>
          <volume>23</volume>
          <issue>6</issue>
          <fpage>1403</fpage>
          <lpage>1415</lpage>
          <comment>
            <ext-link ext-link-type="uri" xlink:type="simple" xlink:href="http://europepmc.org/abstract/MED/34149305"/>
          </comment>
          <pub-id pub-id-type="doi">10.1007/s10796-021-10144-6</pub-id>
          <pub-id pub-id-type="medline">34149305</pub-id>
          <pub-id pub-id-type="pii">10144</pub-id>
          <pub-id pub-id-type="pmcid">PMC8204125</pub-id>
        </nlm-citation>
      </ref>
      <ref id="ref28">
        <label>28</label>
        <nlm-citation citation-type="journal">
          <person-group person-group-type="author">
            <name name-style="western">
              <surname>Rasheed</surname>
              <given-names>J</given-names>
            </name>
            <name name-style="western">
              <surname>Hameed</surname>
              <given-names>AA</given-names>
            </name>
            <name name-style="western">
              <surname>Djeddi</surname>
              <given-names>C</given-names>
            </name>
            <name name-style="western">
              <surname>Jamil</surname>
              <given-names>A</given-names>
            </name>
            <name name-style="western">
              <surname>Al-Turjman</surname>
              <given-names>F</given-names>
            </name>
          </person-group>
          <article-title>A machine learning-based framework for diagnosis of COVID-19 from chest X-ray images</article-title>
          <source>Interdiscip Sci</source>
          <year>2021</year>
          <month>03</month>
          <day>02</day>
          <volume>13</volume>
          <issue>1</issue>
          <fpage>103</fpage>
          <lpage>117</lpage>
          <comment>
            <ext-link ext-link-type="uri" xlink:type="simple" xlink:href="http://europepmc.org/abstract/MED/33387306"/>
          </comment>
          <pub-id pub-id-type="doi">10.1007/s12539-020-00403-6</pub-id>
          <pub-id pub-id-type="medline">33387306</pub-id>
          <pub-id pub-id-type="pii">10.1007/s12539-020-00403-6</pub-id>
          <pub-id pub-id-type="pmcid">PMC7776293</pub-id>
        </nlm-citation>
      </ref>
      <ref id="ref29">
        <label>29</label>
        <nlm-citation citation-type="journal">
          <person-group person-group-type="author">
            <name name-style="western">
              <surname>Morís</surname>
              <given-names>DI</given-names>
            </name>
            <name name-style="western">
              <surname>de Moura Ramos</surname>
              <given-names>JJ</given-names>
            </name>
            <name name-style="western">
              <surname>Buján</surname>
              <given-names>JN</given-names>
            </name>
            <name name-style="western">
              <surname>Hortas</surname>
              <given-names>MO</given-names>
            </name>
          </person-group>
          <article-title>Data augmentation approaches using cycle-consistent adversarial networks for improving COVID-19 screening in portable chest X-ray images</article-title>
          <source>Expert Syst Appl</source>
          <year>2021</year>
          <month>12</month>
          <day>15</day>
          <volume>185</volume>
          <fpage>115681</fpage>
          <comment>
            <ext-link ext-link-type="uri" xlink:type="simple" xlink:href="http://europepmc.org/abstract/MED/34366577"/>
          </comment>
          <pub-id pub-id-type="doi">10.1016/j.eswa.2021.115681</pub-id>
          <pub-id pub-id-type="medline">34366577</pub-id>
          <pub-id pub-id-type="pii">S0957-4174(21)01066-6</pub-id>
          <pub-id pub-id-type="pmcid">PMC8325379</pub-id>
        </nlm-citation>
      </ref>
      <ref id="ref30">
        <label>30</label>
        <nlm-citation citation-type="journal">
          <person-group person-group-type="author">
            <name name-style="western">
              <surname>Zhang</surname>
              <given-names>Q</given-names>
            </name>
            <name name-style="western">
              <surname>Chen</surname>
              <given-names>Z</given-names>
            </name>
            <name name-style="western">
              <surname>Liu</surname>
              <given-names>G</given-names>
            </name>
            <name name-style="western">
              <surname>Zhang</surname>
              <given-names>W</given-names>
            </name>
            <name name-style="western">
              <surname>Du</surname>
              <given-names>Q</given-names>
            </name>
            <name name-style="western">
              <surname>Tan</surname>
              <given-names>J</given-names>
            </name>
            <name name-style="western">
              <surname>Gao</surname>
              <given-names>Q</given-names>
            </name>
          </person-group>
          <article-title>Artificial intelligence clinicians can use chest computed tomography technology to automatically diagnose coronavirus disease 2019 (COVID-19) pneumonia and enhance low-quality images</article-title>
          <source>Infect Drug Resist</source>
          <year>2021</year>
          <month>02</month>
          <volume>14</volume>
          <fpage>671</fpage>
          <lpage>687</lpage>
          <pub-id pub-id-type="doi">10.2147/idr.s296346</pub-id>
        </nlm-citation>
      </ref>
      <ref id="ref31">
        <label>31</label>
        <nlm-citation citation-type="journal">
          <person-group person-group-type="author">
            <name name-style="western">
              <surname>Singh</surname>
              <given-names>RK</given-names>
            </name>
            <name name-style="western">
              <surname>Pandey</surname>
              <given-names>R</given-names>
            </name>
            <name name-style="western">
              <surname>Babu</surname>
              <given-names>RN</given-names>
            </name>
          </person-group>
          <article-title>COVIDScreen: explainable deep learning framework for differential diagnosis of COVID-19 using chest X-rays</article-title>
          <source>Neural Comput Appl</source>
          <year>2021</year>
          <month>01</month>
          <day>08</day>
          <volume>33</volume>
          <issue>14</issue>
          <fpage>8871</fpage>
          <lpage>8892</lpage>
          <comment>
            <ext-link ext-link-type="uri" xlink:type="simple" xlink:href="http://europepmc.org/abstract/MED/33437132"/>
          </comment>
          <pub-id pub-id-type="doi">10.1007/s00521-020-05636-6</pub-id>
          <pub-id pub-id-type="medline">33437132</pub-id>
          <pub-id pub-id-type="pii">5636</pub-id>
          <pub-id pub-id-type="pmcid">PMC7791540</pub-id>
        </nlm-citation>
      </ref>
      <ref id="ref32">
        <label>32</label>
        <nlm-citation citation-type="journal">
          <person-group person-group-type="author">
            <name name-style="western">
              <surname>Karbhari</surname>
              <given-names>Y</given-names>
            </name>
            <name name-style="western">
              <surname>Basu</surname>
              <given-names>A</given-names>
            </name>
            <name name-style="western">
              <surname>Geem</surname>
              <given-names>ZW</given-names>
            </name>
            <name name-style="western">
              <surname>Han</surname>
              <given-names>G</given-names>
            </name>
            <name name-style="western">
              <surname>Sarkar</surname>
              <given-names>R</given-names>
            </name>
          </person-group>
          <article-title>Generation of synthetic chest X-ray images and detection of COVID-19: a deep learning based approach</article-title>
          <source>Diagnostics (Basel)</source>
          <year>2021</year>
          <month>05</month>
          <day>18</day>
          <volume>11</volume>
          <issue>5</issue>
          <fpage>895</fpage>
          <comment>
            <ext-link ext-link-type="uri" xlink:type="simple" xlink:href="https://www.mdpi.com/resolver?pii=diagnostics11050895"/>
          </comment>
          <pub-id pub-id-type="doi">10.3390/diagnostics11050895</pub-id>
          <pub-id pub-id-type="medline">34069841</pub-id>
          <pub-id pub-id-type="pii">diagnostics11050895</pub-id>
          <pub-id pub-id-type="pmcid">PMC8157360</pub-id>
        </nlm-citation>
      </ref>
      <ref id="ref33">
        <label>33</label>
        <nlm-citation citation-type="journal">
          <person-group person-group-type="author">
            <name name-style="western">
              <surname>Amin</surname>
              <given-names>J</given-names>
            </name>
            <name name-style="western">
              <surname>Sharif</surname>
              <given-names>M</given-names>
            </name>
            <name name-style="western">
              <surname>Gul</surname>
              <given-names>N</given-names>
            </name>
            <name name-style="western">
              <surname>Kadry</surname>
              <given-names>S</given-names>
            </name>
            <name name-style="western">
              <surname>Chakraborty</surname>
              <given-names>C</given-names>
            </name>
          </person-group>
          <article-title>Quantum machine learning architecture for COVID-19 classification based on synthetic data generation using conditional adversarial neural network</article-title>
          <source>Cognit Comput</source>
          <year>2021</year>
          <month>08</month>
          <day>10</day>
          <fpage>1</fpage>
          <lpage>12</lpage>
          <comment>
            <ext-link ext-link-type="uri" xlink:type="simple" xlink:href="http://europepmc.org/abstract/MED/34394762"/>
          </comment>
          <pub-id pub-id-type="doi">10.1007/s12559-021-09926-6</pub-id>
          <pub-id pub-id-type="medline">34394762</pub-id>
          <pub-id pub-id-type="pii">9926</pub-id>
          <pub-id pub-id-type="pmcid">PMC8353617</pub-id>
        </nlm-citation>
      </ref>
      <ref id="ref34">
        <label>34</label>
        <nlm-citation citation-type="journal">
          <person-group person-group-type="author">
            <name name-style="western">
              <surname>Zhang</surname>
              <given-names>J</given-names>
            </name>
            <name name-style="western">
              <surname>Yu</surname>
              <given-names>L</given-names>
            </name>
            <name name-style="western">
              <surname>Chen</surname>
              <given-names>D</given-names>
            </name>
            <name name-style="western">
              <surname>Pan</surname>
              <given-names>W</given-names>
            </name>
            <name name-style="western">
              <surname>Shi</surname>
              <given-names>C</given-names>
            </name>
            <name name-style="western">
              <surname>Niu</surname>
              <given-names>Y</given-names>
            </name>
            <name name-style="western">
              <surname>Yao</surname>
              <given-names>X</given-names>
            </name>
            <name name-style="western">
              <surname>Xu</surname>
              <given-names>X</given-names>
            </name>
            <name name-style="western">
              <surname>Cheng</surname>
              <given-names>Y</given-names>
            </name>
          </person-group>
          <article-title>Dense GAN and multi-layer attention based lesion segmentation method for COVID-19 CT images</article-title>
          <source>Biomed Signal Process Control</source>
          <year>2021</year>
          <month>08</month>
          <volume>69</volume>
          <fpage>102901</fpage>
          <comment>
            <ext-link ext-link-type="uri" xlink:type="simple" xlink:href="http://europepmc.org/abstract/MED/34178095"/>
          </comment>
          <pub-id pub-id-type="doi">10.1016/j.bspc.2021.102901</pub-id>
          <pub-id pub-id-type="medline">34178095</pub-id>
          <pub-id pub-id-type="pii">S1746-8094(21)00498-5</pub-id>
          <pub-id pub-id-type="pmcid">PMC8220920</pub-id>
        </nlm-citation>
      </ref>
      <ref id="ref35">
        <label>35</label>
        <nlm-citation citation-type="journal">
          <person-group person-group-type="author">
            <name name-style="western">
              <surname>Hernandez-Cruz</surname>
              <given-names>N</given-names>
            </name>
            <name name-style="western">
              <surname>Cato</surname>
              <given-names>D</given-names>
            </name>
            <name name-style="western">
              <surname>Favela</surname>
              <given-names>J</given-names>
            </name>
          </person-group>
          <article-title>Neural style transfer as data augmentation for improving COVID-19 diagnosis classification</article-title>
          <source>SN Comput Sci</source>
          <year>2021</year>
          <month>08</month>
          <day>13</day>
          <volume>2</volume>
          <issue>5</issue>
          <fpage>410</fpage>
          <comment>
            <ext-link ext-link-type="uri" xlink:type="simple" xlink:href="http://europepmc.org/abstract/MED/34405153"/>
          </comment>
          <pub-id pub-id-type="doi">10.1007/s42979-021-00795-2</pub-id>
          <pub-id pub-id-type="medline">34405153</pub-id>
          <pub-id pub-id-type="pii">795</pub-id>
          <pub-id pub-id-type="pmcid">PMC8361825</pub-id>
        </nlm-citation>
      </ref>
      <ref id="ref36">
        <label>36</label>
        <nlm-citation citation-type="journal">
          <person-group person-group-type="author">
            <name name-style="western">
              <surname>Jiang</surname>
              <given-names>H</given-names>
            </name>
            <name name-style="western">
              <surname>Tang</surname>
              <given-names>S</given-names>
            </name>
            <name name-style="western">
              <surname>Liu</surname>
              <given-names>W</given-names>
            </name>
            <name name-style="western">
              <surname>Zhang</surname>
              <given-names>Y</given-names>
            </name>
          </person-group>
          <article-title>Deep learning for COVID-19 chest CT (computed tomography) image analysis: a lesson from lung cancer</article-title>
          <source>Comput Struct Biotechnol J</source>
          <year>2021</year>
          <volume>19</volume>
          <fpage>1391</fpage>
          <lpage>1399</lpage>
          <comment>
            <ext-link ext-link-type="uri" xlink:type="simple" xlink:href="https://linkinghub.elsevier.com/retrieve/pii/S2001-0370(21)00067-2"/>
          </comment>
          <pub-id pub-id-type="doi">10.1016/j.csbj.2021.02.016</pub-id>
          <pub-id pub-id-type="medline">33680351</pub-id>
          <pub-id pub-id-type="pii">S2001-0370(21)00067-2</pub-id>
          <pub-id pub-id-type="pmcid">PMC7923948</pub-id>
        </nlm-citation>
      </ref>
      <ref id="ref37">
        <label>37</label>
        <nlm-citation citation-type="journal">
          <person-group person-group-type="author">
            <name name-style="western">
              <surname>Bhattacharyya</surname>
              <given-names>A</given-names>
            </name>
            <name name-style="western">
              <surname>Bhaik</surname>
              <given-names>D</given-names>
            </name>
            <name name-style="western">
              <surname>Kumar</surname>
              <given-names>S</given-names>
            </name>
            <name name-style="western">
              <surname>Thakur</surname>
              <given-names>P</given-names>
            </name>
            <name name-style="western">
              <surname>Sharma</surname>
              <given-names>R</given-names>
            </name>
            <name name-style="western">
              <surname>Pachori</surname>
              <given-names>RB</given-names>
            </name>
          </person-group>
          <article-title>A deep learning based approach for automatic detection of COVID-19 cases using chest X-ray images</article-title>
          <source>Biomed Signal Process Control</source>
          <year>2022</year>
          <month>01</month>
          <volume>71</volume>
          <fpage>103182</fpage>
          <comment>
            <ext-link ext-link-type="uri" xlink:type="simple" xlink:href="http://europepmc.org/abstract/MED/34580596"/>
          </comment>
          <pub-id pub-id-type="doi">10.1016/j.bspc.2021.103182</pub-id>
          <pub-id pub-id-type="medline">34580596</pub-id>
          <pub-id pub-id-type="pii">S1746-8094(21)00779-5</pub-id>
          <pub-id pub-id-type="pmcid">PMC8457928</pub-id>
        </nlm-citation>
      </ref>
      <ref id="ref38">
        <label>38</label>
        <nlm-citation citation-type="confproc">
          <person-group person-group-type="author">
            <name name-style="western">
              <surname>Mann</surname>
              <given-names>P</given-names>
            </name>
            <name name-style="western">
              <surname>Jain</surname>
              <given-names>S</given-names>
            </name>
            <name name-style="western">
              <surname>Mittal</surname>
              <given-names>A</given-names>
            </name>
            <name name-style="western">
              <surname>Bhat</surname>
              <given-names>A</given-names>
            </name>
          </person-group>
          <article-title>Generation of COVID-19 chest CT scan images using generative adversarial networks</article-title>
          <year>2021</year>
          <conf-name>2021 International Conference on Intelligent Technologies (CONIT)</conf-name>
          <conf-date>June 25-27, 2021</conf-date>
          <conf-loc>Hubbali, Karnataka, India</conf-loc>
          <fpage>1</fpage>
          <lpage>5</lpage>
          <pub-id pub-id-type="doi">10.1109/conit51480.2021.9498272</pub-id>
        </nlm-citation>
      </ref>
      <ref id="ref39">
        <label>39</label>
        <nlm-citation citation-type="confproc">
          <person-group person-group-type="author">
            <name name-style="western">
              <surname>Quan</surname>
              <given-names>T</given-names>
            </name>
            <name name-style="western">
              <surname>Thanh</surname>
              <given-names>H</given-names>
            </name>
            <name name-style="western">
              <surname>Huy</surname>
              <given-names>T</given-names>
            </name>
            <name name-style="western">
              <surname>Chanh</surname>
              <given-names>N</given-names>
            </name>
            <name name-style="western">
              <surname>Anh</surname>
              <given-names>N</given-names>
            </name>
            <name name-style="western">
              <surname>Vu</surname>
              <given-names>P</given-names>
            </name>
            <name name-style="western">
              <surname>Nam</surname>
              <given-names>N</given-names>
            </name>
            <name name-style="western">
              <surname>Tuong</surname>
              <given-names>T</given-names>
            </name>
            <name name-style="western">
              <surname>Dien</surname>
              <given-names>V</given-names>
            </name>
            <name name-style="western">
              <surname>Van</surname>
              <given-names>GB</given-names>
            </name>
            <name name-style="western">
              <surname>Trung</surname>
              <given-names>B</given-names>
            </name>
          </person-group>
          <article-title>XPGAN: X-ray projected generative adversarial network for improving COVID-19 image classification</article-title>
          <year>2021</year>
          <conf-name>2021 IEEE 18th International Symposium on Biomedical Imaging (ISBI)</conf-name>
          <conf-date>April 13-16, 2021</conf-date>
          <conf-loc>Nice, France</conf-loc>
          <fpage>1509</fpage>
          <lpage>1513</lpage>
          <pub-id pub-id-type="doi">10.1109/isbi48211.2021.9434159</pub-id>
        </nlm-citation>
      </ref>
      <ref id="ref40">
        <label>40</label>
        <nlm-citation citation-type="journal">
          <person-group person-group-type="author">
            <name name-style="western">
              <surname>Waheed</surname>
              <given-names>A</given-names>
            </name>
            <name name-style="western">
              <surname>Goyal</surname>
              <given-names>M</given-names>
            </name>
            <name name-style="western">
              <surname>Gupta</surname>
              <given-names>D</given-names>
            </name>
            <name name-style="western">
              <surname>Khanna</surname>
              <given-names>A</given-names>
            </name>
            <name name-style="western">
              <surname>Al-Turjman</surname>
              <given-names>F</given-names>
            </name>
            <name name-style="western">
              <surname>Pinheiro</surname>
              <given-names>PR</given-names>
            </name>
          </person-group>
          <article-title>CovidGAN: data augmentation using auxiliary classifier GAN for improved COVID-19 detection</article-title>
          <source>IEEE Access</source>
          <year>2020</year>
          <volume>8</volume>
          <fpage>91916</fpage>
          <lpage>91923</lpage>
          <pub-id pub-id-type="doi">10.1109/access.2020.2994762</pub-id>
        </nlm-citation>
      </ref>
      <ref id="ref41">
        <label>41</label>
        <nlm-citation citation-type="confproc">
          <person-group person-group-type="author">
            <name name-style="western">
              <surname>Liang</surname>
              <given-names>Z</given-names>
            </name>
            <name name-style="western">
              <surname>Huang</surname>
              <given-names>J</given-names>
            </name>
            <name name-style="western">
              <surname>Li</surname>
              <given-names>J</given-names>
            </name>
            <name name-style="western">
              <surname>Chan</surname>
              <given-names>S</given-names>
            </name>
          </person-group>
          <article-title>Enhancing automated COVID-19 chest X-ray diagnosis by image-to-image GAN translation</article-title>
          <year>2020</year>
          <conf-name>2020 IEEE International Conference on Bioinformatics and Biomedicine (BIBM)</conf-name>
          <conf-date>December 16-19, 2020</conf-date>
          <conf-loc>Seoul, South Korea</conf-loc>
          <fpage>1068</fpage>
          <lpage>1071</lpage>
          <pub-id pub-id-type="doi">10.1109/bibm49941.2020.9313466</pub-id>
        </nlm-citation>
      </ref>
      <ref id="ref42">
        <label>42</label>
        <nlm-citation citation-type="confproc">
          <person-group person-group-type="author">
            <name name-style="western">
              <surname>Morís</surname>
              <given-names>D</given-names>
            </name>
            <name name-style="western">
              <surname>de Moura</surname>
              <given-names>J</given-names>
            </name>
            <name name-style="western">
              <surname>Novo</surname>
              <given-names>J</given-names>
            </name>
            <name name-style="western">
              <surname>Ortega</surname>
              <given-names>M</given-names>
            </name>
          </person-group>
          <article-title>Cycle generative adversarial network approaches to produce novel portable chest X-rays images for COVID-19 diagnosis</article-title>
          <year>2021</year>
          <conf-name>2021 IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP)</conf-name>
          <conf-date>June 6-11, 2021</conf-date>
          <conf-loc>Toronto, Ontario, Canada</conf-loc>
          <fpage>1060</fpage>
          <lpage>1064</lpage>
          <pub-id pub-id-type="doi">10.1109/icassp39728.2021.9414031</pub-id>
        </nlm-citation>
      </ref>
      <ref id="ref43">
        <label>43</label>
        <nlm-citation citation-type="confproc">
          <person-group person-group-type="author">
            <name name-style="western">
              <surname>Rodríguez-de-la-Cruz</surname>
              <given-names>J</given-names>
            </name>
            <name name-style="western">
              <surname>Acosta-Mesa</surname>
              <given-names>H</given-names>
            </name>
            <name name-style="western">
              <surname>Mezura-Montes</surname>
              <given-names>E</given-names>
            </name>
          </person-group>
          <article-title>Evolution of generative adversarial networks using PSO for synthesis of COVID-19 chest X-ray images</article-title>
          <year>2021</year>
          <conf-name>2021 IEEE Congress on Evolutionary Computation (CEC)</conf-name>
          <conf-date>2021</conf-date>
          <conf-loc>Kraków, Poland</conf-loc>
          <fpage>2226</fpage>
          <lpage>2233</lpage>
          <pub-id pub-id-type="doi">10.1109/cec45853.2021.9504743</pub-id>
        </nlm-citation>
      </ref>
      <ref id="ref44">
        <label>44</label>
        <nlm-citation citation-type="confproc">
          <person-group person-group-type="author">
            <name name-style="western">
              <surname>Nneji</surname>
              <given-names>G</given-names>
            </name>
            <name name-style="western">
              <surname>Cai</surname>
              <given-names>J</given-names>
            </name>
            <name name-style="western">
              <surname>Jianhua</surname>
              <given-names>D</given-names>
            </name>
            <name name-style="western">
              <surname>Monday</surname>
              <given-names>H</given-names>
            </name>
            <name name-style="western">
              <surname>Chikwendu</surname>
              <given-names>I</given-names>
            </name>
            <name name-style="western">
              <surname>Oluwasanmi</surname>
              <given-names>A</given-names>
            </name>
            <name name-style="western">
              <surname>James</surname>
              <given-names>E</given-names>
            </name>
            <name name-style="western">
              <surname>Mgbejime</surname>
              <given-names>G</given-names>
            </name>
          </person-group>
          <article-title>Enhancing low quality in radiograph datasets using wavelet transform convolutional neural network and generative adversarial network for COVID-19 identification</article-title>
          <year>2021</year>
          <conf-name>2021 4th International Conference on Pattern Recognition and Artificial Intelligence (PRAI)</conf-name>
          <conf-date>August 20-22, 2021</conf-date>
          <conf-loc>Yibin, China</conf-loc>
          <fpage>146</fpage>
          <lpage>151</lpage>
          <pub-id pub-id-type="doi">10.1109/prai53619.2021.9551043</pub-id>
        </nlm-citation>
      </ref>
      <ref id="ref45">
        <label>45</label>
        <nlm-citation citation-type="confproc">
          <person-group person-group-type="author">
            <name name-style="western">
              <surname>Menon</surname>
              <given-names>S</given-names>
            </name>
            <name name-style="western">
              <surname>Galita</surname>
              <given-names>J</given-names>
            </name>
            <name name-style="western">
              <surname>Chapman</surname>
              <given-names>D</given-names>
            </name>
            <name name-style="western">
              <surname>Gangopadhyay</surname>
              <given-names>A</given-names>
            </name>
            <name name-style="western">
              <surname>Mangalagiri</surname>
              <given-names>J</given-names>
            </name>
            <name name-style="western">
              <surname>Nguyen</surname>
              <given-names>P</given-names>
            </name>
            <name name-style="western">
              <surname>Yesha</surname>
              <given-names>Y</given-names>
            </name>
            <name name-style="western">
              <surname>Yesha</surname>
              <given-names>Y</given-names>
            </name>
            <name name-style="western">
              <surname>Saboury</surname>
              <given-names>B</given-names>
            </name>
            <name name-style="western">
              <surname>Morris</surname>
              <given-names>M</given-names>
            </name>
          </person-group>
          <article-title>Generating realistic COVID-19 X-rays with a mean teacher + transfer learning GAN</article-title>
          <year>2020</year>
          <conf-name>2020 IEEE International Conference on Big Data (Big Data)</conf-name>
          <conf-date>December 10-13, 2020</conf-date>
          <conf-loc>Virtual</conf-loc>
          <fpage>1216</fpage>
          <lpage>1225</lpage>
          <pub-id pub-id-type="doi">10.1109/bigdata50022.2020.9377878</pub-id>
        </nlm-citation>
      </ref>
      <ref id="ref46">
        <label>46</label>
        <nlm-citation citation-type="confproc">
          <person-group person-group-type="author">
            <name name-style="western">
              <surname>Dong</surname>
              <given-names>S</given-names>
            </name>
            <name name-style="western">
              <surname>Zhang</surname>
              <given-names>Z</given-names>
            </name>
          </person-group>
          <article-title>Joint optimization of cycleGAN and CNN classifier for COVID-19 detection and biomarker localization</article-title>
          <year>2020</year>
          <conf-name>2020 IEEE International Conference on Progress in Informatics and Computing (PIC)</conf-name>
          <conf-date>December 17-19, 2020</conf-date>
          <conf-loc>Shanghai, China</conf-loc>
          <fpage>112</fpage>
          <lpage>118</lpage>
          <pub-id pub-id-type="doi">10.1109/pic50277.2020.9350813</pub-id>
        </nlm-citation>
      </ref>
      <ref id="ref47">
        <label>47</label>
        <nlm-citation citation-type="journal">
          <person-group person-group-type="author">
            <name name-style="western">
              <surname>Yadav</surname>
              <given-names>P</given-names>
            </name>
            <name name-style="western">
              <surname>Menon</surname>
              <given-names>N</given-names>
            </name>
            <name name-style="western">
              <surname>Ravi</surname>
              <given-names>V</given-names>
            </name>
            <name name-style="western">
              <surname>Vishvanathan</surname>
              <given-names>S</given-names>
            </name>
          </person-group>
          <article-title>Lung-GANs: unsupervised representation learning for lung disease classification using chest CT and X-ray images</article-title>
          <source>IEEE Trans Eng Manag</source>
          <year>2021</year>
          <fpage>1</fpage>
          <lpage>13</lpage>
          <pub-id pub-id-type="doi">10.1109/tem.2021.3103334</pub-id>
        </nlm-citation>
      </ref>
      <ref id="ref48">
        <label>48</label>
        <nlm-citation citation-type="journal">
          <person-group person-group-type="author">
            <name name-style="western">
              <surname>Yang</surname>
              <given-names>Z</given-names>
            </name>
            <name name-style="western">
              <surname>Zhao</surname>
              <given-names>L</given-names>
            </name>
            <name name-style="western">
              <surname>Wu</surname>
              <given-names>S</given-names>
            </name>
            <name name-style="western">
              <surname>Chen</surname>
              <given-names>CY</given-names>
            </name>
          </person-group>
          <article-title>Lung lesion localization of COVID-19 from chest CT image: a novel weakly supervised learning method</article-title>
          <source>IEEE J Biomed Health Inform</source>
          <year>2021</year>
          <month>06</month>
          <volume>25</volume>
          <issue>6</issue>
          <fpage>1864</fpage>
          <lpage>1872</lpage>
          <pub-id pub-id-type="doi">10.1109/jbhi.2021.3067465</pub-id>
        </nlm-citation>
      </ref>
      <ref id="ref49">
        <label>49</label>
        <nlm-citation citation-type="confproc">
          <person-group person-group-type="author">
            <name name-style="western">
              <surname>Mangalagiri</surname>
              <given-names>J</given-names>
            </name>
            <name name-style="western">
              <surname>Chapman</surname>
              <given-names>D</given-names>
            </name>
            <name name-style="western">
              <surname>Gangopadhyay</surname>
              <given-names>A</given-names>
            </name>
            <name name-style="western">
              <surname>Yesha</surname>
              <given-names>Y</given-names>
            </name>
            <name name-style="western">
              <surname>Galita</surname>
              <given-names>J</given-names>
            </name>
            <name name-style="western">
              <surname>Menon</surname>
              <given-names>S</given-names>
            </name>
            <name name-style="western">
              <surname>Yesha</surname>
              <given-names>Y</given-names>
            </name>
            <name name-style="western">
              <surname>Saboury</surname>
              <given-names>B</given-names>
            </name>
            <name name-style="western">
              <surname>Morris</surname>
              <given-names>M</given-names>
            </name>
            <name name-style="western">
              <surname>Nguyen</surname>
              <given-names>P</given-names>
            </name>
          </person-group>
          <article-title>Toward generating synthetic CT volumes using a 3D-conditional generative adversarial network</article-title>
          <year>2020</year>
          <conf-name>2020 International Conference on Computational Science and Computational Intelligence (CSCI)</conf-name>
          <conf-date>December 16-18, 2020</conf-date>
          <conf-loc>Las Vegas, NV</conf-loc>
          <fpage>858</fpage>
          <lpage>862</lpage>
          <pub-id pub-id-type="doi">10.1109/csci51800.2020.00160</pub-id>
        </nlm-citation>
      </ref>
      <ref id="ref50">
        <label>50</label>
        <nlm-citation citation-type="journal">
          <person-group person-group-type="author">
            <name name-style="western">
              <surname>Sakib</surname>
              <given-names>S</given-names>
            </name>
            <name name-style="western">
              <surname>Tazrin</surname>
              <given-names>T</given-names>
            </name>
            <name name-style="western">
              <surname>Fouda</surname>
              <given-names>MM</given-names>
            </name>
            <name name-style="western">
              <surname>Fadlullah</surname>
              <given-names>ZM</given-names>
            </name>
            <name name-style="western">
              <surname>Guizani</surname>
              <given-names>M</given-names>
            </name>
          </person-group>
          <article-title>DL-CRC: deep learning-based chest radiograph classification for COVID-19 detection: a novel approach</article-title>
          <source>IEEE Access</source>
          <year>2020</year>
          <volume>8</volume>
          <fpage>171575</fpage>
          <lpage>171589</lpage>
          <pub-id pub-id-type="doi">10.1109/access.2020.3025010</pub-id>
        </nlm-citation>
      </ref>
      <ref id="ref51">
        <label>51</label>
        <nlm-citation citation-type="confproc">
          <person-group person-group-type="author">
            <name name-style="western">
              <surname>Yang</surname>
              <given-names>Y</given-names>
            </name>
            <name name-style="western">
              <surname>Chen</surname>
              <given-names>J</given-names>
            </name>
            <name name-style="western">
              <surname>Wang</surname>
              <given-names>R</given-names>
            </name>
            <name name-style="western">
              <surname>Ma</surname>
              <given-names>T</given-names>
            </name>
            <name name-style="western">
              <surname>Wang</surname>
              <given-names>L</given-names>
            </name>
            <name name-style="western">
              <surname>Chen</surname>
              <given-names>J</given-names>
            </name>
            <name name-style="western">
              <surname>Zheng</surname>
              <given-names>W</given-names>
            </name>
            <name name-style="western">
              <surname>Zhang</surname>
              <given-names>T</given-names>
            </name>
          </person-group>
          <article-title>Towards unbiased COVID-19 lesion localisation and segmentation via weakly supervised learning</article-title>
          <year>2021</year>
          <conf-name>2021 IEEE 18th International Symposium on Biomedical Imaging (ISBI)</conf-name>
          <conf-date>April 13-16, 2021</conf-date>
          <conf-loc>Virtual</conf-loc>
          <fpage>1966</fpage>
          <lpage>1970</lpage>
          <pub-id pub-id-type="doi">10.1109/isbi48211.2021.9433806</pub-id>
        </nlm-citation>
      </ref>
      <ref id="ref52">
        <label>52</label>
        <nlm-citation citation-type="journal">
          <person-group person-group-type="author">
            <name name-style="western">
              <surname>Loey</surname>
              <given-names>M</given-names>
            </name>
            <name name-style="western">
              <surname>Smarandache</surname>
              <given-names>F</given-names>
            </name>
            <name name-style="western">
              <surname>Khalifa</surname>
              <given-names>NEM</given-names>
            </name>
          </person-group>
          <article-title>Within the lack of chest COVID-19 X-ray dataset: a novel detection model based on GAN and deep transfer learning</article-title>
          <source>Symmetry</source>
          <year>2020</year>
          <month>04</month>
          <day>20</day>
          <volume>12</volume>
          <issue>4</issue>
          <fpage>651</fpage>
          <pub-id pub-id-type="doi">10.3390/sym12040651</pub-id>
        </nlm-citation>
      </ref>
      <ref id="ref53">
        <label>53</label>
        <nlm-citation citation-type="book">
          <person-group person-group-type="author">
            <name name-style="western">
              <surname>Khalifa</surname>
              <given-names>N</given-names>
            </name>
            <name name-style="western">
              <surname>Taha</surname>
              <given-names>M</given-names>
            </name>
            <name name-style="western">
              <surname>Hassanien</surname>
              <given-names>A</given-names>
            </name>
            <name name-style="western">
              <surname>Taha</surname>
              <given-names>S</given-names>
            </name>
          </person-group>
          <article-title>The detection of COVID-19 in CT medical images: a deep learning approach</article-title>
          <source>Big Data Analytics and Artificial Intelligence against COVID-19: Innovation Vision and Approach</source>
          <year>2020</year>
          <publisher-loc>Cham</publisher-loc>
          <publisher-name>Springer</publisher-name>
          <fpage>73</fpage>
          <lpage>90</lpage>
        </nlm-citation>
      </ref>
      <ref id="ref54">
        <label>54</label>
        <nlm-citation citation-type="journal">
          <person-group person-group-type="author">
            <name name-style="western">
              <surname>Zunair</surname>
              <given-names>H</given-names>
            </name>
            <name name-style="western">
              <surname>Hamza</surname>
              <given-names>AB</given-names>
            </name>
          </person-group>
          <article-title>Synthesis of COVID-19 chest X-rays using unpaired image-to-image translation</article-title>
          <source>Soc Netw Anal Min</source>
          <year>2021</year>
          <month>02</month>
          <day>24</day>
          <volume>11</volume>
          <issue>1</issue>
          <fpage>23</fpage>
          <comment>
            <ext-link ext-link-type="uri" xlink:type="simple" xlink:href="http://europepmc.org/abstract/MED/33643491">http://europepmc.org/abstract/MED/33643491</ext-link>
          </comment>
          <pub-id pub-id-type="doi">10.1007/s13278-021-00731-5</pub-id>
          <pub-id pub-id-type="medline">33643491</pub-id>
          <pub-id pub-id-type="pii">731</pub-id>
          <pub-id pub-id-type="pmcid">PMC7903408</pub-id>
        </nlm-citation>
      </ref>
      <ref id="ref55">
        <label>55</label>
        <nlm-citation citation-type="book">
          <person-group person-group-type="author">
            <name name-style="western">
              <surname>Sachdev</surname>
              <given-names>J</given-names>
            </name>
            <name name-style="western">
              <surname>Bhatnagar</surname>
              <given-names>N</given-names>
            </name>
            <name name-style="western">
              <surname>Bhatnagar</surname>
              <given-names>R</given-names>
            </name>
          </person-group>
          <article-title>Deep learning models using auxiliary classifier GAN for COVID-19 detection–a comparative study</article-title>
          <source>The International Conference on Artificial Intelligence and Computer Vision</source>
          <year>2021</year>
          <publisher-loc>Cham</publisher-loc>
          <publisher-name>Springer</publisher-name>
          <fpage>12</fpage>
          <lpage>23</lpage>
        </nlm-citation>
      </ref>
      <ref id="ref56">
        <label>56</label>
        <nlm-citation citation-type="journal">
          <person-group person-group-type="author">
            <name name-style="western">
              <surname>Morís</surname>
              <given-names>D</given-names>
            </name>
            <name name-style="western">
              <surname>de Moura</surname>
              <given-names>J</given-names>
            </name>
            <name name-style="western">
              <surname>Novo</surname>
              <given-names>J</given-names>
            </name>
            <name name-style="western">
              <surname>Ortega</surname>
              <given-names>M</given-names>
            </name>
          </person-group>
          <article-title>Portable chest X-ray synthetic image generation for the COVID-19 screening</article-title>
          <source>Eng Proc</source>
          <year>2021</year>
          <volume>7</volume>
          <issue>1</issue>
          <fpage>6</fpage>
          <pub-id pub-id-type="doi">10.3390/engproc2021007006</pub-id>
        </nlm-citation>
      </ref>
      <ref id="ref57">
        <label>57</label>
        <nlm-citation citation-type="journal">
          <person-group person-group-type="author">
            <name name-style="western">
              <surname>Munawar</surname>
              <given-names>F</given-names>
            </name>
            <name name-style="western">
              <surname>Azmat</surname>
              <given-names>S</given-names>
            </name>
            <name name-style="western">
              <surname>Iqbal</surname>
              <given-names>T</given-names>
            </name>
            <name name-style="western">
              <surname>Gronlund</surname>
              <given-names>C</given-names>
            </name>
            <name name-style="western">
              <surname>Ali</surname>
              <given-names>H</given-names>
            </name>
          </person-group>
          <article-title>Segmentation of lungs in chest X-ray image using generative adversarial networks</article-title>
          <source>IEEE Access</source>
          <year>2020</year>
          <volume>8</volume>
          <fpage>153535</fpage>
          <lpage>153545</lpage>
          <pub-id pub-id-type="doi">10.1109/access.2020.3017915</pub-id>
        </nlm-citation>
      </ref>
      <ref id="ref58">
        <label>58</label>
        <nlm-citation citation-type="journal">
          <person-group person-group-type="author">
            <name name-style="western">
              <surname>Oluwasanmi</surname>
              <given-names>A</given-names>
            </name>
            <name name-style="western">
              <surname>Aftab</surname>
              <given-names>MU</given-names>
            </name>
            <name name-style="western">
              <surname>Qin</surname>
              <given-names>Z</given-names>
            </name>
            <name name-style="western">
              <surname>Ngo</surname>
              <given-names>ST</given-names>
            </name>
            <name name-style="western">
              <surname>Doan</surname>
              <given-names>TV</given-names>
            </name>
            <name name-style="western">
              <surname>Nguyen</surname>
              <given-names>SB</given-names>
            </name>
            <name name-style="western">
              <surname>Nguyen</surname>
              <given-names>SH</given-names>
            </name>
          </person-group>
          <article-title>Transfer learning and semisupervised adversarial detection and classification of COVID-19 in CT images</article-title>
          <source>Complexity</source>
          <year>2021</year>
          <month>02</month>
          <day>13</day>
          <volume>2021</volume>
          <fpage>1</fpage>
          <lpage>11</lpage>
          <pub-id pub-id-type="doi">10.1155/2021/6680455</pub-id>
        </nlm-citation>
      </ref>
      <ref id="ref59">
        <label>59</label>
        <nlm-citation citation-type="journal">
          <person-group person-group-type="author">
            <name name-style="western">
              <surname>Sanjalawe</surname>
              <given-names>Y</given-names>
            </name>
            <name name-style="western">
              <surname>Anbar</surname>
              <given-names>M</given-names>
            </name>
            <name name-style="western">
              <surname>Al-E'mari</surname>
              <given-names>S</given-names>
            </name>
          </person-group>
          <article-title>COVID-19 automatic detection using deep learning</article-title>
          <source>Comput Syst Sci Eng</source>
          <year>2021</year>
          <month>01</month>
          <day>01</day>
          <volume>39</volume>
          <issue>1</issue>
          <fpage>15</fpage>
          <lpage>35</lpage>
          <pub-id pub-id-type="doi">10.32604/csse.2021.017191</pub-id>
        </nlm-citation>
      </ref>
      <ref id="ref60">
        <label>60</label>
        <nlm-citation citation-type="journal">
          <person-group person-group-type="author">
            <name name-style="western">
              <surname>Al-Shargabi</surname>
              <given-names>AA</given-names>
            </name>
            <name name-style="western">
              <surname>Alshobaili</surname>
              <given-names>JF</given-names>
            </name>
            <name name-style="western">
              <surname>Alabdulatif</surname>
              <given-names>A</given-names>
            </name>
            <name name-style="western">
              <surname>Alrobah</surname>
              <given-names>N</given-names>
            </name>
          </person-group>
          <article-title>COVID-CGAN: efficient deep learning approach for COVID-19 detection based on CXR images using conditional GANs</article-title>
          <source>Appl Sci</source>
          <year>2021</year>
          <month>08</month>
          <day>04</day>
          <volume>11</volume>
          <issue>16</issue>
          <fpage>7174</fpage>
          <pub-id pub-id-type="doi">10.3390/app11167174</pub-id>
        </nlm-citation>
      </ref>
      <ref id="ref61">
        <label>61</label>
        <nlm-citation citation-type="confproc">
          <person-group person-group-type="author">
            <name name-style="western">
              <surname>Shivadekar</surname>
              <given-names>S</given-names>
            </name>
            <name name-style="western">
              <surname>Mangalagiri</surname>
              <given-names>J</given-names>
            </name>
            <name name-style="western">
              <surname>Nguyen</surname>
              <given-names>P</given-names>
            </name>
            <name name-style="western">
              <surname>Chapman</surname>
              <given-names>D</given-names>
            </name>
            <name name-style="western">
              <surname>Halem</surname>
              <given-names>M</given-names>
            </name>
            <name name-style="western">
              <surname>Gite</surname>
              <given-names>R</given-names>
            </name>
          </person-group>
          <article-title>An intelligent parallel distributed streaming framework for near real-time science sensors and high-resolution medical images</article-title>
          <year>2021</year>
          <conf-name>50th International Conference on Parallel Processing Workshop</conf-name>
          <conf-date>August 9-12, 2021</conf-date>
          <conf-loc>Chicago, IL</conf-loc>
          <fpage>1</fpage>
          <lpage>9</lpage>
          <pub-id pub-id-type="doi">10.1145/3458744.3474039</pub-id>
        </nlm-citation>
      </ref>
      <ref id="ref62">
        <label>62</label>
        <nlm-citation citation-type="confproc">
          <person-group person-group-type="author">
            <name name-style="western">
              <surname>Toutouh</surname>
              <given-names>J</given-names>
            </name>
            <name name-style="western">
              <surname>O'Reilly</surname>
              <given-names>UM</given-names>
            </name>
          </person-group>
          <article-title>Signal propagation in a gradient-based and evolutionary learning system</article-title>
          <year>2021</year>
          <conf-name>Proceedings of the Genetic and Evolutionary Computation Conference (GECCO '21)</conf-name>
          <conf-date>July 10-14, 2021</conf-date>
          <conf-loc>Lille, France</conf-loc>
          <pub-id pub-id-type="doi">10.1145/3449639.3459319</pub-id>
        </nlm-citation>
      </ref>
      <ref id="ref63">
        <label>63</label>
        <nlm-citation citation-type="journal">
          <person-group person-group-type="author">
            <name name-style="western">
              <surname>Acar</surname>
              <given-names>E</given-names>
            </name>
            <name name-style="western">
              <surname>Şahin</surname>
              <given-names>E</given-names>
            </name>
            <name name-style="western">
              <surname>Yılmaz</surname>
              <given-names>İ</given-names>
            </name>
          </person-group>
          <article-title>Improving effectiveness of different deep learning-based models for detecting COVID-19 from computed tomography (CT) images</article-title>
          <source>Neural Comput Appl</source>
          <year>2021</year>
          <month>07</month>
          <day>29</day>
          <volume>33</volume>
          <issue>24</issue>
          <fpage>1</fpage>
          <lpage>21</lpage>
          <comment>
            <ext-link ext-link-type="uri" xlink:type="simple" xlink:href="http://europepmc.org/abstract/MED/34345118">http://europepmc.org/abstract/MED/34345118</ext-link>
          </comment>
          <pub-id pub-id-type="doi">10.1007/s00521-021-06344-5</pub-id>
          <pub-id pub-id-type="medline">34345118</pub-id>
          <pub-id pub-id-type="pii">6344</pub-id>
          <pub-id pub-id-type="pmcid">PMC8321007</pub-id>
        </nlm-citation>
      </ref>
      <ref id="ref64">
        <label>64</label>
        <nlm-citation citation-type="journal">
          <person-group person-group-type="author">
            <name name-style="western">
              <surname>Sheykhivand</surname>
              <given-names>S</given-names>
            </name>
            <name name-style="western">
              <surname>Mousavi</surname>
              <given-names>Z</given-names>
            </name>
            <name name-style="western">
              <surname>Mojtahedi</surname>
              <given-names>S</given-names>
            </name>
            <name name-style="western">
              <surname>Yousefi Rezaii</surname>
              <given-names>T</given-names>
            </name>
            <name name-style="western">
              <surname>Farzamnia</surname>
              <given-names>A</given-names>
            </name>
            <name name-style="western">
              <surname>Meshgini</surname>
              <given-names>S</given-names>
            </name>
            <name name-style="western">
              <surname>Saad</surname>
              <given-names>I</given-names>
            </name>
          </person-group>
          <article-title>Developing an efficient deep neural network for automatic detection of COVID-19 using chest X-ray images</article-title>
          <source>Alex Eng J</source>
          <year>2021</year>
          <month>06</month>
          <volume>60</volume>
          <issue>3</issue>
          <fpage>2885</fpage>
          <lpage>2903</lpage>
          <pub-id pub-id-type="doi">10.1016/j.aej.2021.01.011</pub-id>
        </nlm-citation>
      </ref>
      <ref id="ref65">
        <label>65</label>
        <nlm-citation citation-type="journal">
          <person-group person-group-type="author">
            <name name-style="western">
              <surname>Rangarajan</surname>
              <given-names>AK</given-names>
            </name>
            <name name-style="western">
              <surname>Ramachandran</surname>
              <given-names>HK</given-names>
            </name>
          </person-group>
          <article-title>A preliminary analysis of AI based smartphone application for diagnosis of COVID-19 using chest X-ray images</article-title>
          <source>Expert Syst Appl</source>
          <year>2021</year>
          <month>11</month>
          <volume>183</volume>
          <fpage>115401</fpage>
          <pub-id pub-id-type="doi">10.1016/j.eswa.2021.115401</pub-id>
        </nlm-citation>
      </ref>
      <ref id="ref66">
        <label>66</label>
        <nlm-citation citation-type="confproc">
          <person-group person-group-type="author">
            <name name-style="western">
              <surname>Li</surname>
              <given-names>H</given-names>
            </name>
            <name name-style="western">
              <surname>Hu</surname>
              <given-names>Y</given-names>
            </name>
            <name name-style="western">
              <surname>Li</surname>
              <given-names>S</given-names>
            </name>
            <name name-style="western">
              <surname>Lin</surname>
              <given-names>W</given-names>
            </name>
            <name name-style="western">
              <surname>Liu</surname>
              <given-names>P</given-names>
            </name>
            <name name-style="western">
              <surname>Higashita</surname>
              <given-names>R</given-names>
            </name>
            <name name-style="western">
              <surname>Liu</surname>
              <given-names>J</given-names>
            </name>
          </person-group>
          <article-title>CT scan synthesis for promoting computer-aided diagnosis capacity of COVID-19</article-title>
          <year>2020</year>
          <conf-name>International Conference on Intelligent Computing 2020</conf-name>
          <conf-date>October 2-5, 2020</conf-date>
          <conf-loc>Bari, Italy</conf-loc>
          <fpage>413</fpage>
          <lpage>422</lpage>
          <pub-id pub-id-type="doi">10.1007/978-3-030-60802-6_36</pub-id>
        </nlm-citation>
      </ref>
      <ref id="ref67">
        <label>67</label>
        <nlm-citation citation-type="journal">
          <person-group person-group-type="author">
            <name name-style="western">
              <surname>Zulkifley</surname>
              <given-names>MA</given-names>
            </name>
            <name name-style="western">
              <surname>Abdani</surname>
              <given-names>SR</given-names>
            </name>
            <name name-style="western">
              <surname>Zulkifley</surname>
              <given-names>NH</given-names>
            </name>
          </person-group>
          <article-title>COVID-19 screening using a lightweight convolutional neural network with generative adversarial network data augmentation</article-title>
          <source>Symmetry</source>
          <year>2020</year>
          <month>09</month>
          <day>16</day>
          <volume>12</volume>
          <issue>9</issue>
          <fpage>1530</fpage>
          <pub-id pub-id-type="doi">10.3390/sym12091530</pub-id>
        </nlm-citation>
      </ref>
      <ref id="ref68">
        <label>68</label>
        <nlm-citation citation-type="journal">
          <person-group person-group-type="author">
            <name name-style="western">
              <surname>El-Shafai</surname>
              <given-names>W</given-names>
            </name>
            <name name-style="western">
              <surname>Ali</surname>
              <given-names>A</given-names>
            </name>
            <name name-style="western">
              <surname>El-Rabaie</surname>
              <given-names>E</given-names>
            </name>
            <name name-style="western">
              <surname>Soliman</surname>
              <given-names>N</given-names>
            </name>
            <name name-style="western">
              <surname>Algarni</surname>
              <given-names>A</given-names>
            </name>
            <name name-style="western">
              <surname>El-Samie</surname>
              <given-names>A</given-names>
            </name>
          </person-group>
          <article-title>Automated COVID-19 detection based on single-image super-resolution and CNN models</article-title>
          <source>Comput Mater Continua</source>
          <year>2021</year>
          <fpage>1141</fpage>
          <lpage>1157</lpage>
          <pub-id pub-id-type="doi">10.32604/cmc.2022.018547</pub-id>
        </nlm-citation>
      </ref>
      <ref id="ref69">
        <label>69</label>
        <nlm-citation citation-type="journal">
          <person-group person-group-type="author">
            <name name-style="western">
              <surname>Karar</surname>
              <given-names>M</given-names>
            </name>
            <name name-style="western">
              <surname>Shouman</surname>
              <given-names>M</given-names>
            </name>
            <name name-style="western">
              <surname>Chalopin</surname>
              <given-names>C</given-names>
            </name>
          </person-group>
          <article-title>Adversarial neural network classifiers for COVID-19 diagnosis in ultrasound images</article-title>
          <source>Comput Mater Continua</source>
          <year>2021</year>
          <fpage>1683</fpage>
          <lpage>1697</lpage>
          <pub-id pub-id-type="doi">10.32604/cmc.2022.018564</pub-id>
        </nlm-citation>
      </ref>
      <ref id="ref70">
        <label>70</label>
        <nlm-citation citation-type="journal">
          <person-group person-group-type="author">
            <name name-style="western">
              <surname>Zebin</surname>
              <given-names>T</given-names>
            </name>
            <name name-style="western">
              <surname>Rezvy</surname>
              <given-names>S</given-names>
            </name>
          </person-group>
          <article-title>COVID-19 detection and disease progression visualization: deep learning on chest X-rays for classification and coarse localization</article-title>
          <source>Appl Intell (Dordr)</source>
          <year>2021</year>
          <month>09</month>
          <day>12</day>
          <volume>51</volume>
          <issue>2</issue>
          <fpage>1010</fpage>
          <lpage>1021</lpage>
          <comment>
            <ext-link ext-link-type="uri" xlink:type="simple" xlink:href="http://europepmc.org/abstract/MED/34764549">http://europepmc.org/abstract/MED/34764549</ext-link>
          </comment>
          <pub-id pub-id-type="doi">10.1007/s10489-020-01867-1</pub-id>
          <pub-id pub-id-type="medline">34764549</pub-id>
          <pub-id pub-id-type="pii">1867</pub-id>
          <pub-id pub-id-type="pmcid">PMC7486976</pub-id>
        </nlm-citation>
      </ref>
      <ref id="ref71">
        <label>71</label>
        <nlm-citation citation-type="confproc">
          <person-group person-group-type="author">
            <name name-style="western">
              <surname>Ambita</surname>
              <given-names>A</given-names>
            </name>
            <name name-style="western">
              <surname>Boquio</surname>
              <given-names>E</given-names>
            </name>
            <name name-style="western">
              <surname>Naval</surname>
              <given-names>P</given-names>
            </name>
          </person-group>
          <article-title>COViT-GAN: vision transformer for COVID-19 detection in CT scan images with self-attention GAN for data augmentation</article-title>
          <year>2021</year>
          <conf-name>International Conference on Artificial Neural Networks 2021</conf-name>
          <conf-date>September 2021</conf-date>
          <conf-loc>Virtual</conf-loc>
          <fpage>587</fpage>
          <lpage>598</lpage>
          <pub-id pub-id-type="doi">10.1007/978-3-030-86340-1_47</pub-id>
        </nlm-citation>
      </ref>
      <ref id="ref72">
        <label>72</label>
        <nlm-citation citation-type="book">
          <person-group person-group-type="author">
            <name name-style="western">
              <surname>Elghamrawy</surname>
              <given-names>S</given-names>
            </name>
          </person-group>
          <article-title>An H2O’s deep learning-inspired model based on big data analytics for coronavirus disease (COVID-19) diagnosis</article-title>
          <source>Big Data Analytics and Artificial Intelligence against COVID-19: Innovation Vision and Approach</source>
          <year>2021</year>
          <publisher-loc>Cham</publisher-loc>
          <publisher-name>Springer</publisher-name>
          <fpage>263</fpage>
          <lpage>279</lpage>
        </nlm-citation>
      </ref>
      <ref id="ref73">
        <label>73</label>
        <nlm-citation citation-type="confproc">
          <person-group person-group-type="author">
            <name name-style="western">
              <surname>Toutouh</surname>
              <given-names>J</given-names>
            </name>
            <name name-style="western">
              <surname>Esteban</surname>
              <given-names>M</given-names>
            </name>
            <name name-style="western">
              <surname>Nesmachnow</surname>
              <given-names>S</given-names>
            </name>
          </person-group>
          <article-title>Parallel/distributed generative adversarial neural networks for data augmentation of COVID-19 training images</article-title>
          <year>2020</year>
          <conf-name>Latin American High Performance Computing Conference 2020</conf-name>
          <conf-date>September 2-4, 2020</conf-date>
          <conf-loc>Virtual</conf-loc>
          <fpage>162</fpage>
          <lpage>177</lpage>
          <pub-id pub-id-type="doi">10.1007/978-3-030-68035-0_12</pub-id>
        </nlm-citation>
      </ref>
      <ref id="ref74">
        <label>74</label>
        <nlm-citation citation-type="journal">
          <person-group person-group-type="author">
            <name name-style="western">
              <surname>Bar-El</surname>
              <given-names>A</given-names>
            </name>
            <name name-style="western">
              <surname>Cohen</surname>
              <given-names>D</given-names>
            </name>
            <name name-style="western">
              <surname>Cahan</surname>
              <given-names>N</given-names>
            </name>
            <name name-style="western">
              <surname>Greenspan</surname>
              <given-names>H</given-names>
            </name>
          </person-group>
          <article-title>Improved cycleGAN with application to COVID-19 classification</article-title>
          <source>Proc SPIE</source>
          <year>2021</year>
          <volume>11596</volume>
          <fpage>1159614</fpage>
          <pub-id pub-id-type="doi">10.1117/12.2582162</pub-id>
        </nlm-citation>
      </ref>
      <ref id="ref75">
        <label>75</label>
        <nlm-citation citation-type="web">
          <source>SARS-COV-2 Ct-Scan Dataset: A Large Dataset of CT Scans for SARS-CoV-2 (COVID-19) Identification</source>
          <access-date>2022-06-22</access-date>
          <comment>
            <ext-link ext-link-type="uri" xlink:type="simple" xlink:href="https://www.kaggle.com/datasets/plameneduardo/sarscov2-ctscan-dataset">https://www.kaggle.com/datasets/plameneduardo/sarscov2-ctscan-dataset</ext-link>
          </comment>
        </nlm-citation>
      </ref>
      <ref id="ref76">
        <label>76</label>
        <nlm-citation citation-type="web">
          <source>COVID-CT</source>
          <access-date>2022-06-22</access-date>
          <comment>
            <ext-link ext-link-type="uri" xlink:type="simple" xlink:href="https://github.com/UCSD-AI4H/COVID-CT">https://github.com/UCSD-AI4H/COVID-CT</ext-link>
          </comment>
        </nlm-citation>
      </ref>
      <ref id="ref77">
        <label>77</label>
        <nlm-citation citation-type="web">
          <source>HKBU_HPML_COVID-19</source>
          <access-date>2022-06-22</access-date>
          <comment>
            <ext-link ext-link-type="uri" xlink:type="simple" xlink:href="https://github.com/wang-shihao/HKBU_HPML_COVID-19">https://github.com/wang-shihao/HKBU_HPML_COVID-19</ext-link>
          </comment>
        </nlm-citation>
      </ref>
      <ref id="ref78">
        <label>78</label>
        <nlm-citation citation-type="web">
          <source>covid-chestxray-dataset</source>
          <access-date>2022-06-22</access-date>
          <comment>
            <ext-link ext-link-type="uri" xlink:type="simple" xlink:href="https://github.com/ieee8023/covid-chestxray-dataset">https://github.com/ieee8023/covid-chestxray-dataset</ext-link>
          </comment>
        </nlm-citation>
      </ref>
      <ref id="ref79">
        <label>79</label>
        <nlm-citation citation-type="web">
          <source>Actualmed COVID-19 Chest X-ray Dataset Initiative</source>
          <access-date>2022-06-22</access-date>
          <comment>
            <ext-link ext-link-type="uri" xlink:type="simple" xlink:href="https://github.com/agchung/Actualmed-COVID-chestxray-dataset">https://github.com/agchung/Actualmed-COVID-chestxray-dataset</ext-link>
          </comment>
        </nlm-citation>
      </ref>
      <ref id="ref80">
        <label>80</label>
        <nlm-citation citation-type="web">
          <source>COVID-19 Radiography Database</source>
          <access-date>2022-06-22</access-date>
          <comment>
            <ext-link ext-link-type="uri" xlink:type="simple" xlink:href="https://www.kaggle.com/tawsifurrahman/covid19-radiography-database">https://www.kaggle.com/tawsifurrahman/covid19-radiography-database</ext-link>
          </comment>
        </nlm-citation>
      </ref>
      <ref id="ref81">
        <label>81</label>
        <nlm-citation citation-type="web">
          <source>Figure 1 COVID-19 Chest X-ray Dataset Initiative</source>
          <access-date>2022-06-22</access-date>
          <comment>
            <ext-link ext-link-type="uri" xlink:type="simple" xlink:href="https://github.com/agchung/Figure1-COVID-chestxray-dataset">https://github.com/agchung/Figure1-COVID-chestxray-dataset</ext-link>
          </comment>
        </nlm-citation>
      </ref>
      <ref id="ref82">
        <label>82</label>
        <nlm-citation citation-type="web">
          <source>Chest X-Ray Images (Pneumonia)</source>
          <access-date>2022-06-22</access-date>
          <comment>
            <ext-link ext-link-type="uri" xlink:type="simple" xlink:href="https://www.kaggle.com/paultimothymooney/chest-xray-pneumonia">https://www.kaggle.com/paultimothymooney/chest-xray-pneumonia</ext-link>
          </comment>
        </nlm-citation>
      </ref>
      <ref id="ref83">
        <label>83</label>
        <nlm-citation citation-type="web">
          <source>Extensive COVID-19 X-Ray and CT Chest Images Dataset</source>
          <access-date>2022-06-22</access-date>
          <comment>
            <ext-link ext-link-type="uri" xlink:type="simple" xlink:href="https://data.mendeley.com/datasets/8h65ywd2jr/3">https://data.mendeley.com/datasets/8h65ywd2jr/3</ext-link>
          </comment>
        </nlm-citation>
      </ref>
      <ref id="ref84">
        <label>84</label>
        <nlm-citation citation-type="web">
          <source>COVID-19 CT Segmentation Dataset</source>
          <access-date>2022-06-22</access-date>
          <comment>
            <ext-link ext-link-type="uri" xlink:type="simple" xlink:href="https://medicalsegmentation.com/covid19/">https://medicalsegmentation.com/covid19/</ext-link>
          </comment>
        </nlm-citation>
      </ref>
      <ref id="ref85">
        <label>85</label>
        <nlm-citation citation-type="web">
          <source>COVID-19 Open Research Dataset Challenge (CORD-19)</source>
          <access-date>2022-06-22</access-date>
          <comment>
            <ext-link ext-link-type="uri" xlink:type="simple" xlink:href="https://www.kaggle.com/allen-institute-for-ai/CORD-19-research-challenge">https://www.kaggle.com/allen-institute-for-ai/CORD-19-research-challenge</ext-link>
          </comment>
        </nlm-citation>
      </ref>
      <ref id="ref86">
        <label>86</label>
        <nlm-citation citation-type="web">
          <source>RSNA Pneumonia Detection Challenge: Can You Build an Algorithm That Automatically Detects Potential Pneumonia Cases?</source>
          <access-date>2022-06-22</access-date>
          <comment>
            <ext-link ext-link-type="uri" xlink:type="simple" xlink:href="https://www.kaggle.com/c/rsna-pneumonia-detection-challenge/data">https://www.kaggle.com/c/rsna-pneumonia-detection-challenge/data</ext-link>
          </comment>
        </nlm-citation>
      </ref>
      <ref id="ref87">
        <label>87</label>
        <nlm-citation citation-type="web">
          <source>CT Images and Clinical Features for COVID-19</source>
          <access-date>2022-06-22</access-date>
          <comment>
            <ext-link ext-link-type="uri" xlink:type="simple" xlink:href="http://ictcf.biocuckoo.cn/HUST-19.php">http://ictcf.biocuckoo.cn/HUST-19.php</ext-link>
          </comment>
        </nlm-citation>
      </ref>
      <ref id="ref88">
        <label>88</label>
        <nlm-citation citation-type="web">
          <source>Automatic Detection of COVID-19 from Ultrasound Data</source>
          <access-date>2022-06-22</access-date>
          <comment>
            <ext-link ext-link-type="uri" xlink:type="simple" xlink:href="https://github.com/jannisborn/covid19_ultrasound">https://github.com/jannisborn/covid19_ultrasound</ext-link>
          </comment>
        </nlm-citation>
      </ref>
      <ref id="ref89">
        <label>89</label>
        <nlm-citation citation-type="web">
          <source>COVID-19 Chest Xray</source>
          <access-date>2022-06-22</access-date>
          <comment>
            <ext-link ext-link-type="uri" xlink:type="simple" xlink:href="https://www.kaggle.com/bachrr/covid-chest-xray">https://www.kaggle.com/bachrr/covid-chest-xray</ext-link>
          </comment>
        </nlm-citation>
      </ref>
      <ref id="ref90">
        <label>90</label>
        <nlm-citation citation-type="web">
          <source>Società Italiana di Radiologia Medica e Interventistica</source>
          <access-date>2022-06-22</access-date>
          <comment>
            <ext-link ext-link-type="uri" xlink:type="simple" xlink:href="https://sirm.org/category/senza-categoria/covid-19/">https://sirm.org/category/senza-categoria/covid-19/</ext-link>
          </comment>
        </nlm-citation>
      </ref>
      <ref id="ref91">
        <label>91</label>
        <nlm-citation citation-type="web">
          <source>Johns Hopkins University Coronavirus Resource Center</source>
          <access-date>2022-06-21</access-date>
          <comment>
            <ext-link ext-link-type="uri" xlink:type="simple" xlink:href="https://coronavirus.jhu.edu/">https://coronavirus.jhu.edu/</ext-link>
          </comment>
        </nlm-citation>
      </ref>
    </ref-list>
  </back>
</article>
