Published on 21.07.2021 in Vol 9, No 7 (2021): July

Preprints (earlier versions) of this paper are available at https://preprints.jmir.org/preprint/27955.
Automatic Extraction of Lung Cancer Staging Information From Computed Tomography Reports: Deep Learning Approach


Original Paper

1College of Biomedical Engineering and Instrumental Science, Zhejiang University, Hangzhou, China

2Key Laboratory for Biomedical Engineering, Ministry of Education, Hangzhou, China

3Department of Thoracic Surgery II, Peking University Cancer Hospital & Institute, Beijing, China

Corresponding Author:

Xudong Lu, PhD

College of Biomedical Engineering and Instrumental Science

Zhejiang University

38 Zheda Road

Hangzhou, 310027

China

Phone: 86 13957118891

Email: lvxd@zju.edu.cn


Background: Lung cancer is the leading cause of cancer deaths worldwide. Clinical staging of lung cancer plays a crucial role in making treatment decisions and evaluating prognosis. However, in clinical practice, approximately one-half of the clinical stages of lung cancer patients are inconsistent with their pathological stages. As one of the most important diagnostic modalities for staging, chest computed tomography (CT) provides a wealth of information about cancer staging, but the free-text nature of the CT reports obstructs their computerization.

Objective: We aimed to automatically extract the staging-related information from CT reports to support accurate clinical staging of lung cancer.

Methods: In this study, we developed an information extraction (IE) system to extract the staging-related information from CT reports. The system consisted of the following three parts: named entity recognition (NER), relation classification (RC), and postprocessing (PP). We first summarized 22 questions about lung cancer staging based on the TNM staging guideline. Next, three state-of-the-art NER algorithms were implemented to recognize the entities of interest. Then, we designed a novel RC method using the relation sign constraint (RSC) to classify the relations between entities. Finally, a rule-based PP module was established to obtain the formatted answers using the results of NER and RC.

Results: We evaluated the developed IE system on a clinical data set containing 392 chest CT reports collected from the Department of Thoracic Surgery II in the Peking University Cancer Hospital. The experimental results showed that the bidirectional encoder representation from transformers (BERT) model outperformed the iterated dilated convolutional neural networks-conditional random field (ID-CNN-CRF) and bidirectional long short-term memory networks-conditional random field (Bi-LSTM-CRF) for NER tasks with macro-F1 scores of 80.97% and 90.06% under the exact and inexact matching schemes, respectively. For the RC task, the proposed RSC showed better performance than the baseline methods. Further, the BERT-RSC model achieved the best performance with a macro-F1 score of 97.13% and a micro-F1 score of 98.37%. Moreover, the rule-based PP module could correctly obtain the formatted results using the extractions of NER and RC, achieving a macro-F1 score of 94.57% and a micro-F1 score of 96.74% for all the 22 questions.

Conclusions: We conclude that the developed IE system can effectively and accurately extract information about lung cancer staging from CT reports. Experimental results show that the extracted results have significant potential for further use in stage verification and prediction to facilitate accurate clinical staging.

JMIR Med Inform 2021;9(7):e27955

doi:10.2196/27955


Background

Lung cancer is a group of diseases involving abnormal cell growth in the lung tissue with the potential to invade adjoining parts of the body and spread to other organs. It is the most commonly diagnosed cancer and the leading cause of cancer deaths worldwide [1], which has been a heavy burden on communities and a critical barrier to increasing life expectancy.

Clinical staging of lung cancer plays a critical role in making treatment decisions and evaluating prognosis [2]. In current clinical practice, clinicians usually determine the clinical stage of lung cancer. Although various advanced diagnostic modalities with high sensitivity and specificity are used by clinical experts, clinical staging still disagrees with pathological staging in approximately one-half of patients, as reported in earlier studies [3,4]. Incorrect clinical staging of lung cancer may result in suboptimal treatment decisions, possibly leading to poor outcomes [3].

As an indispensable examination technique for lung cancer patients, chest computed tomography (CT) provides a large volume of valuable information about the primary tumor and lymph nodes, which is of paramount importance for clinical staging [2,5]. In addition, CT reports record the radiologists' inferences about the findings in the images. Although this information in the form of natural language is effective and convenient for communication in clinical settings, its free-text nature poses difficulties when summarizing or analyzing it for secondary purposes such as research and quality improvement. Moreover, manually extracting this information is time-consuming and expensive [6,7].

In this study, we aimed to develop an information extraction (IE) system to automatically extract valuable information from CT reports using natural language processing (NLP) techniques to support accurate clinical staging. We first summarized 22 questions about the diagnosis and staging of lung cancer based on the TNM stage guideline [8]. Subsequently, 14 types of entities and 4 types of relations were defined to represent the related information in the CT reports. Using the annotated reports, the following three state-of-the-art deep learning named entity recognition (NER) models were developed to label the entities: iterated dilated convolutional neural networks (ID-CNN) [9], bidirectional long short-term memory networks (Bi-LSTM) [10], and bidirectional encoder representation from transformers (BERT) [11]. Next, a novel relation classification (RC) approach using the relation sign constraint (RSC) was proposed to determine the relations between entities. Finally, a rule-based postprocessing (PP) module was developed to obtain the formatted results by analyzing the entities and relations extracted by NER and RC. We empirically evaluated our system using a real clinical data set. Experimental results showed that the system could extract entities and relations as well as obtain the answers to the questions correctly. Using these extracted results, we can verify the clinical staging accuracy and further develop staging prediction models to alleviate the problem of inaccurate clinical staging.

Related Works

IE refers to the task of automatically extracting structured semantics (eg, entities, relations, and events) from unstructured text. Cancer information is often extracted from free-text clinical narratives, such as operation notes, radiology reports, and pathology reports, using rule-based, machine learning, or hybrid methods, which have been widely investigated [12]. In terms of staging information, most studies have extracted only the clinical or pathological stage statements (eg, Stage I, Stage II, and T3N2) but not detailed phenotypes [13-20]. Besides the stage statements, Savova et al [21] and Ping et al [22] extracted some tumor-related information such as the location and size. However, these extracted phenotypes are considerably limited in their ability to support staging, particularly for lymph nodes. To support diagnosis and staging, Yim et al [23] employed a hybrid method to recognize diverse entities and relations from radiology reports for hepatocellular cancer patients, but without further elaboration on how to exploit the extracted information. Chen et al [24] extracted information from various clinical notes including operation notes and CT reports to calculate the Cancer of Liver Italian Program (CLIP) score for hepatocellular cancer patients; however, they provided limited details about the extraction from the radiology corpus. Bozkurt et al first developed an IE pipeline to extract various types of information from mammography reports [25] and then used the extracted features as the inputs for Bayesian networks to predict malignancy of breast cancer [26].

These rule-based and conventional machine learning methods have extracted information about cancer successfully, and some of them have exploited the extracted results to provide further diagnosis and staging decision support. Nevertheless, the development of handcrafted features and the use of external resources like the Unified Medical Language System (UMLS) and Systematized Nomenclature of Medicine Clinical Terms (SNOMED CT) are time-consuming and can even introduce additional propagation errors [10,27,28]. Recently, with the rapid development of deep neural networks, advanced approaches exhibit excellent performance in many NLP tasks without tedious feature engineering [27-33]. Furthermore, some researchers began to adopt these advanced techniques to extract cancer information. Si et al [34] proposed a frame-based NLP method using Bi-LSTM-conditional random field (Bi-LSTM-CRF) to extract cancer-related information with a two-stage strategy. They first identified the keywords in the sentences to determine their frames and then employed models to label the entities in each frame. Using this strategy, they grouped the related entities by different frames. A limitation of this study is that they evaluated each process in the pipeline using gold standard annotations separately but did not report the overall results of the pipeline. Gao et al [35] proposed a novel hierarchical attention network to predict the primary sites and histological grades of tumors in a text classification manner. Although this approach can directly provide the classification results and show the importance of each word in the text, the scope of the information extracted is considerably limited and insufficient to support cancer diagnosis and staging.

In this study, we aimed to develop an IE system using deep learning methods to extract information about lung cancer staging from CT reports to better support the accurate clinical staging of lung cancer. Our specific contributions involve (1) defining a group of entity types and relation types to cover a wealth of information about lung cancer staging in CT reports, (2) applying advanced deep learning algorithms to develop the IE system, and (3) evaluating the performance of the IE system in a pipeline manner using real clinical CT reports.


Figure 1 illustrates the development process of the IE system. First, we annotated the entities and relations in the collected CT reports as the gold standard. Next, the annotated CT reports were used to develop and evaluate the three core parts of the IE system. We also used 50 CT reports to verify the overall performance of the IE system in a pipeline manner. The details of each part are elaborated as follows.

Figure 1. Development process of the information extraction system. BERT: bidirectional encoder representation from transformers; BERT-RC: bidirectional encoder representation from transformers-relation classification; CT: computed tomography; NER: named entity recognition; RC: relation classification.

Data Annotation

In clinical practice, clinicians usually follow the TNM staging guideline to stage the patients. Therefore, we first analyzed the eighth edition of the lung cancer TNM staging summary and parsed it into 41 questions to determine the scope of staging information (Multimedia Appendix 1). Note that the staging guideline covers three aspects of lung cancer (ie, tumor [T], lymph node [N], and metastases [M]), with detailed criteria. Chest CT can hardly provide all the information related to lung cancer staging. Clinicians also use other diagnostic modalities like positron emission tomography (PET), magnetic resonance imaging (MRI), and pathological biopsy to stage the patients. Thus, based on the content of the CT reports, 19 questions were identified under the clinician’s guidance. Moreover, we also included 3 questions about the shape, density, and enhancement extent of the tumors. These 3 questions can facilitate the diagnosis of benign and malignant tumors. All 22 questions are listed in Table 1.

Based on the questions listed in Table 1, we defined 14 types of entities and 4 types of relations to represent the staging-related information in the CT reports. Table 2 shows the defined entities. Figure 2 illustrates the entity–entity relation map.

Two medical informatics engineers were recruited to manually annotate the 392 CT reports following the annotation guideline. The details of the annotation guideline are listed in Multimedia Appendix 2. To develop the guideline, the annotators first independently annotated 10 reports and discussed the discrepancies until a consensus was reached in consultation with clinicians, resulting in a revised annotation guideline. Using the revised guideline, the annotators independently annotated 10 new reports and repeated the above process. In this manner, the guideline was refined through at least five iterations of annotation, discussion, consultation, and amendment before being finalized. According to the final annotation guideline, we randomly selected 100 reports to be annotated by both annotators to measure the interannotator agreement using the kappa statistic [36]. The remaining 292 reports were each annotated by only one of the annotators. The BIO labeling scheme was employed to annotate the data, and brat [37] was used as the annotation tool. Figure 3 shows an example of the annotated CT reports.
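For illustration, the following is a minimal Python sketch of how character-level BIO labels could be derived from brat-style standoff annotations. The file layout, entity offsets, and helper names are illustrative assumptions, not the study's actual preprocessing code.

```python
# Minimal sketch: convert brat-style standoff annotations (.ann) to character-level
# BIO labels for one Chinese CT report. Discontinuous spans are ignored in this sketch;
# the real label inventory follows the annotation guideline in Multimedia Appendix 2.

def read_ann(ann_path):
    """Parse entity lines such as 'T1<TAB>Location 0 7<TAB>右肺下叶基底段' from a .ann file."""
    entities = []
    with open(ann_path, encoding="utf-8") as f:
        for line in f:
            if not line.startswith("T"):
                continue  # skip relation/event/attribute lines here
            _, type_span, _ = line.rstrip("\n").split("\t")
            etype, start, end = type_span.split()[:3]
            entities.append((etype, int(start), int(end)))
    return entities

def to_bio(text, entities):
    """Assign B-/I- tags to every character covered by an entity, and O elsewhere."""
    labels = ["O"] * len(text)
    for etype, start, end in entities:
        labels[start] = f"B-{etype}"
        for i in range(start + 1, end):
            labels[i] = f"I-{etype}"
    return list(zip(list(text), labels))
```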

Table 1. Questions about lung cancer diagnosis and staging.^a

No. | Question | Type of answer | Stage
1 | Whether the tumor can be visualized by imaging or bronchoscopy? | Yes/No | TX
2 | What is the greatest dimension of the tumor? | Numerical | T1-4
3 | Whether the tumor invades the lobar bronchus? | Yes/No | T1
4 | Whether the tumor invades the visceral pleura? | Yes/No | T2
5 | Whether there is an atelectasis or obstructive pneumonitis that extends to the hilar region, either involving part of the lung or the entire lung? | Yes/No | T2
6 | Whether there is (are) associated separate tumor nodule(s) in the same lobe as the primary? | Yes/No | T3
7 | Whether the tumor invades the great vessels? | Yes/No | T4
8 | Whether the tumor invades the vertebral body? | Yes/No | T4
9 | Whether there is (are) separate tumor nodule(s) in a different ipsilateral lobe to that of the primary? | Yes/No | T4
10 | Whether there is regional lymph node metastasis? | Yes/No | N0
11 | Whether there is metastasis in ipsilateral hilar lymph nodes, including involvement by direct extension? | Yes/No | N1
12 | Whether there is metastasis in ipsilateral mediastinal lymph nodes? | Yes/No | N2
13 | Whether there is metastasis in subcarinal lymph nodes? | Yes/No | N2
14 | Whether there is metastasis in contralateral mediastinal lymph nodes? | Yes/No | N3
15 | Whether there is metastasis in contralateral hilar lymph nodes? | Yes/No | N3
16 | Whether there is metastasis in supraclavicular lymph nodes? | Yes/No | N3
17 | Whether there is (are) separate tumor nodule(s) in a contralateral lobe? | Yes/No | M1a
18 | Whether the tumor is accompanied by pleural nodules? | Yes/No | M1a
19 | Whether there is malignant pleural or pericardial effusion? | Yes/No | M1a
20^b | What is the shape of the tumor? | Text | NA
21^b | What is the density of the tumor? | Text | NA
22^b | What is the enhancement extent of the tumor? | Text | NA

^a The stages are based on the eighth edition of the lung cancer TNM staging summary.

^b The questions are not used for staging but are important for the diagnosis of benign and malignant tumors.

Table 2. Types of entities with descriptions and instances.

Entity type | Description | Instance
Mass | Suspected mass/nodule/lesion in the lung | 肿物 (mass)
Lymph node | Suspected lymph node metastasis | 肿大淋巴结 (enlarged lymph node)
Location | Location of mass or lymph node | 左上肺右基底段 (right basal segment of the upper left lung)
Size | Size of mass or lymph node | 25×22 cm
Negation | Negative words | 未见 (not seen)
Density | Density of mass | 磨玻璃密度 (ground glass density)
Enhancement | Enhancement extent of mass | 强化明显 (significant enhancement)
Shape | Shape of mass | 边缘见毛刺 (spiculated margin)
Bronchus | Description of bronchial invasion | 支气管狭窄 (bronchial stenosis)
Pleura | Description of pleural invasion or metastasis | 胸膜凹陷 (pleural indentation)
Vessel | Description of great vessel invasion | 包绕左肺动脉 (surrounds the left pulmonary artery)
Vertebral body | Description of vertebral body invasion | 椎体见骨质破坏 (bone destruction seen in the vertebral body)
Effusion | Description of pleural or pericardial effusion | 心包积液 (pericardial effusion)
PAOP^a | Description of pulmonary atelectasis or obstructive pneumonitis | 肺组织不张 (atelectasis)

^a PAOP: pulmonary atelectasis/obstructive pneumonitis.

Figure 2. Entity–entity relation map for extracting lung cancer staging information. PAOP: pulmonary atelectasis/obstructive pneumonitis.
Figure 3. Annotated computed tomography report based on the annotation guideline.

Word Embedding

As an unsupervised feature representation technique, word embedding maps words to real-valued vectors that capture semantic and syntactic information from the corpus. In this study, we adopted word embeddings pretrained on the Chinese Wikipedia corpus using word2vec [38] for the conventional CNN and recurrent neural network (RNN) models. Note that unlike English, Chinese words can be composed of multiple characters, with no spaces appearing between words. To incorporate word segmentation information into the NER task, we first used jieba [39], a well-known Chinese text segmentation toolkit, to segment each sentence. Then, randomly initialized real-valued vectors were used as a segmentation embedding to indicate whether a character is the first, middle, or last character of the segmented word. For BERT, we used the default vocabulary to map the tokens to natural numbers.
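As an illustration of this input representation, the sketch below combines a placeholder character vector with a randomly initialized segmentation embedding derived from jieba. The embedding file name, the 300-dimensional character vectors, and the 20-dimensional segmentation vectors are assumptions, not the exact settings used in the study.

```python
# Sketch of the features described above: a pretrained character vector concatenated with a
# segmentation feature marking whether a character begins (B), continues (M), or ends (E) a
# jieba word (S for single-character words).
import jieba
import numpy as np
from gensim.models import KeyedVectors

# char_vectors = KeyedVectors.load_word2vec_format("zh_wiki_char_300d.vec")  # hypothetical file

def segmentation_tags(sentence):
    """Return one of B/M/E/S per character according to jieba word segmentation."""
    tags = []
    for word in jieba.cut(sentence):
        if len(word) == 1:
            tags.append("S")
        else:
            tags.extend(["B"] + ["M"] * (len(word) - 2) + ["E"])
    return tags

# Randomly initialized segmentation embedding, as described in the text.
seg_embedding = {tag: np.random.uniform(-0.25, 0.25, 20) for tag in "BMES"}

sentence = "右肺下叶基底段见软组织密度肿块"
features = [np.concatenate([np.zeros(300),            # placeholder for the pretrained char vector
                            seg_embedding[tag]])       # 20-dim segmentation feature
            for ch, tag in zip(sentence, segmentation_tags(sentence))]
```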

NER Process

NER is an essential technique to identify the types and boundaries of the entities of interest, which can drive other NLP tasks [40-43]. Recently developed deep learning NER methods exhibit more powerful performances than the traditional methods without tedious feature engineering [27,29,30,44]. In this study, we selected ID-CNN-CRF, Bi-LSTM-CRF, and BERT to recognize the entities.

ID-CNN is an advanced algorithm extending from the dilated CNN [45]. Instead of simply increasing the depth of a stacked dilated CNN, the ID-CNN applies the same small stack of dilated convolutions multiple times, with each iteration taking the result of the last application as the input to incorporate global information from a whole sentence and alleviate the overfitting problem. Bi-LSTM is another deep learning method using the recurrent neural network architecture that can capture the long-distance dependencies of context from both sides of the sequence and alleviate gradient vanishing or explosion during entity recognition from clinical text. A CRF layer was also employed on the ID-CNN and Bi-LSTM models, as it can exploit the relation constraints among different labels to find the optimal label path for sequence labeling tasks.

BERT is a novel language representation model pretrained on a large corpus using bidirectional transformers [46]. Unlike the traditional embedding methods that can only represent a word with polysemy using one fixed vector, BERT can dynamically adjust the representation depending on the context of the word. It can also be easily fine-tuned to adapt to specific tasks, such as NER, RC, and question answering, and it has shown more powerful performance than conventional CNN and RNN models.
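For concreteness, the following is a minimal sketch of fine-tuning BERT for character-level NER with the Hugging Face transformers library. The bert-base-chinese checkpoint, the learning rate, and the helper function are assumptions for illustration; the study's actual hyperparameters are listed in Multimedia Appendix 5.

```python
# Minimal sketch of BERT fine-tuning for character-level NER with the 14 entity types
# (29 BIO labels). Training loop details are simplified to a single step per sentence.
import torch
from transformers import BertTokenizerFast, BertForTokenClassification

NUM_LABELS = 14 * 2 + 1  # B-/I- tag for each entity type, plus O
tokenizer = BertTokenizerFast.from_pretrained("bert-base-chinese")
model = BertForTokenClassification.from_pretrained("bert-base-chinese", num_labels=NUM_LABELS)
optimizer = torch.optim.AdamW(model.parameters(), lr=3e-5)

def training_step(chars, bio_label_ids):
    """One gradient step on a single character-split sentence with gold BIO label ids."""
    enc = tokenizer(chars, is_split_into_words=True, return_tensors="pt")
    # Align gold labels to wordpieces; special tokens ([CLS]/[SEP]) are ignored with -100.
    word_ids = enc.word_ids()
    labels = [[-100 if i is None else bio_label_ids[i] for i in word_ids]]
    out = model(**enc, labels=torch.tensor(labels))
    out.loss.backward()
    optimizer.step()
    optimizer.zero_grad()
    return out.loss.item()
```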

RC Process

RC is the task of finding semantic relations between pairs of entities, which can group the relevant entities together to generate richer semantics [42,43]. Although traditional RC methods have achieved satisfactory performance [47,48], deep learning RC methods obtain better results and provide an effective way to alleviate the problem of handcrafted features [10,27,28]. In this study, we selected attention-based bidirectional long short-term memory networks (Attention-Bi-LSTM) [32] and BERT to classify the relations between entities.

Note that in this study, two entities in a sentence can have only one type of relation or no relation, depending on the definition in Figure 2. For instance, the relation between a lymph node entity and a location entity may be At or NoRelation, but definitely not a SizeOf relation. This information is useful for simplifying the multiclass classification problem into a binary classification problem. We propose a novel approach, namely RSC, to use this extra information for RC. Before feeding the original sentence into the relation classifier, we first add one of the tags At, SizeOf, Negate, Related, or NoRelation at the beginning of the sentence (eg, “At<e1>左肺门及纵隔4、5组</e1>见<e2>肿大淋巴结</e2>,较大约14×12mm.”). Here, the added At tag is determined based on the two target entity types (location and lymph node). Then, the sentence with the tag is input into the RC model. In this way, we simply incorporate the entity–entity relation constraints into the model to improve the prediction performance.
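The sketch below illustrates how such a relation sign could be prepended. The entity-pair-to-sign table is a partial, illustrative subset of the relation map in Figure 2, not the complete mapping used in the study.

```python
# Sketch of the relation sign constraint (RSC): the candidate relation type implied by the
# two entity types is prepended to the marked sentence before it is fed to the RC model.
ENTITY_PAIR_TO_SIGN = {  # illustrative subset only
    frozenset(["Location", "Mass"]): "At",
    frozenset(["Location", "Lymph node"]): "At",
    frozenset(["Size", "Mass"]): "SizeOf",
    frozenset(["Size", "Lymph node"]): "SizeOf",
    frozenset(["Negation", "Mass"]): "Negate",
    frozenset(["Pleura", "Mass"]): "Related",
}

def add_relation_sign(sentence_with_markers, type_e1, type_e2):
    """Prepend the candidate relation tag, eg, 'At<e1>左肺门及纵隔4、5组</e1>见<e2>肿大淋巴结</e2>'."""
    sign = ENTITY_PAIR_TO_SIGN.get(frozenset([type_e1, type_e2]), "NoRelation")
    return sign + sentence_with_markers
```

The RC model then only needs to decide whether the prepended relation actually holds, which is the binary simplification described above.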

PP Step

To obtain the answers to the questions listed in Table 1, it is not enough to directly use the extracted triples (entity 1–relation–entity 2), and further analysis is needed. For example, to answer the question on whether there is metastasis in ipsilateral mediastinal lymph nodes, we first need to know whether there exist a primary tumor and a mediastinal lymph node metastasis for this patient, and then determine the relative position of these two. In this study, we developed a rule-based PP module to process the extracted triples by the NER and RC models. The PP step is presented in Figure 4 and the rules are listed in Multimedia Appendix 3.
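As a simplified illustration of this step, the sketch below answers one question (Q12) from the extracted triples. The laterality and station keywords are illustrative assumptions; the actual rules are given in Multimedia Appendix 3.

```python
# Simplified sketch of the rule-based postprocessing for Q12 (metastasis in ipsilateral
# mediastinal lymph nodes). Triples are produced by the NER and RC models.

def side_of(location_text):
    """Very rough laterality rule: 左 = left, 右 = right."""
    if "左" in location_text:
        return "left"
    if "右" in location_text:
        return "right"
    return None

def answer_q12(triples):
    """triples: (entity1_text, entity1_type, relation, entity2_text, entity2_type) tuples.
    Answer Yes if a mediastinal lymph node shares laterality with the primary tumor."""
    tumor_sides, node_sides = set(), set()
    for e1, type1, rel, e2, type2 in triples:
        if rel != "At" or type2 != "Location":
            continue
        if type1 == "Mass":
            tumor_sides.add(side_of(e2))
        elif type1 == "Lymph node" and "纵隔" in e2:  # 纵隔 = mediastinum
            node_sides.add(side_of(e2))
    shared = (tumor_sides & node_sides) - {None}
    return "Yes" if shared else "No"
```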

Figure 4. Postprocessing steps.

Evaluation Metrics

To evaluate the performance of the models, we used the precision, recall, and F1 score as the evaluation metrics. Moreover, we also employed the microaverages and macroaverages for overall performance evaluation. The corresponding formulations are listed in Multimedia Appendix 4.
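For reference, the standard formulations assumed here are shown below; the exact definitions used in the study are given in Multimedia Appendix 4.

```latex
% Standard definitions (assumed here) of the per-class metrics and their averages.
\mathrm{Precision} = \frac{TP}{TP + FP}, \qquad
\mathrm{Recall} = \frac{TP}{TP + FN}, \qquad
F_1 = \frac{2 \cdot \mathrm{Precision} \cdot \mathrm{Recall}}{\mathrm{Precision} + \mathrm{Recall}}

% Macroaverage: average the per-class scores; microaverage: pool the counts over all C classes.
\text{macro-}F_1 = \frac{1}{C} \sum_{c=1}^{C} F_{1,c}, \qquad
\text{micro-}\mathrm{Precision} = \frac{\sum_{c} TP_c}{\sum_{c} (TP_c + FP_c)}
```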


Data Annotation Results

A total of 392 chest CT reports of lung cancer patients were collected from the Department of Thoracic Surgery II of the Peking University Cancer Hospital. Two medical informatics engineers were recruited to annotate the entities and relations based on the annotation guideline. The statistics of the annotations are summarized in Tables 3 and 4. Both engineers annotated the same 100 CT reports to calculate the interannotator agreement, and the κ values were 0.937 for the entity annotation and 0.946 for the relation annotation, indicating the reliability of the annotation. Prior approval was obtained from the Ethics Committee of the Peking University Cancer Hospital to conduct this study.

Table 3. Statistics of annotated named entities.

Entity | Annotated entities, n
Mass | 767
Lymph node | 492
Location | 1748
Size | 699
Negation | 808
Density | 147
Enhancement | 146
Shape | 437
Bronchus | 124
Pleura | 262
Vessel | 41
Vertebral body | 25
Effusion | 363
PAOP | 78

Table 4. Statistics of annotated relations.

Relation | Annotated relations, n
At | 1811
SizeOf | 683
Related | 988
Negate | 803

NER Results

To train and evaluate the NER models, we randomly separated 70% of the CT reports as the training set, 10% as the validation set, and 20% as the test set. The early stopping strategy was used on the validation set to avoid the overfitting problem. The hyperparameters used in this study are listed in Multimedia Appendix 5. We repeated the entire training and evaluation process five times to reduce the possible bias that may be caused by data partitioning.

Table 5 and Figure 5 show the results of the NER models. As shown in Table 5, the BERT model achieves the best overall performance with a macro-F1 score of 80.97% and a micro-F1 score of 88.5%. We can notice that the entities with abundant annotations or plain descriptions (eg, “Lymph Node,” “Negation,” “Size,” and “Effusion”) obtain satisfactory results with F1 scores greater than 90%. However, the performance degrades for the entities with a small number of annotations or diverse descriptions (eg, “Shape,” “Pleura,” “Vessel,” “Vertebral Body,” and “PAOP”). Figure 5 shows the results in a more intuitive manner with standard deviations.

By further analyzing the extractions, we found that most of the errors were due to an inexact match, where a predicted entity overlapped with the gold standard. For example, the predicted entity “余 (B-Location)肺 (I-Location)内 (O)” is an inexact match for the gold standard annotation “余 (B-Location)肺 (I-Location)内 (I-Location).” Although these extractions could not cover the gold standard exactly, the partially matched entities still contained useful information for RC and PP. We also calculated the inexact matching performances for each type of entity and have presented them in Table 6 and Figure 6.
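The following sketch illustrates one way exact and inexact (overlap-based) matches could be counted; it is an assumed implementation of the two schemes, not the study's actual scoring code.

```python
# Sketch of exact vs inexact span matching for NER evaluation. A prediction is an exact
# match when boundaries and type both agree, and an inexact match when the type agrees
# and the character spans overlap.

def overlaps(a, b):
    """True if half-open character spans (start, end) overlap."""
    return a[0] < b[1] and b[0] < a[1]

def count_matches(gold, pred):
    """gold/pred: lists of (start, end, type) spans for one report."""
    exact = sum(1 for p in pred if p in gold)
    inexact = sum(1 for (ps, pe, pt) in pred
                  if any(pt == gt and overlaps((ps, pe), (gs, ge)) for gs, ge, gt in gold))
    return exact, inexact
```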

As shown in Table 6, the macro-F1 scores of ID-CNN-CRF, Bi-LSTM-CRF, and BERT under the inexact metrics are 89.6%, 89.96%, and 90.06%, representing improvements of 13.93, 12.69, and 9.09 percentage points over the exact metrics, respectively. Furthermore, the micro-F1 scores under the inexact metrics are all above 94%. Almost all the entities obtain better extraction results under the inexact matching scheme, especially those with diverse descriptions, which indicates that the extractions cover most of the annotations.

Table 5. Performance of the named entity recognition models. Each cell lists precision / recall / F1 score (%).

Entity | ID-CNN-CRF^a | Bi-LSTM-CRF^b | BERT^c
Mass | 83.11 / 87.79 / 85.35 | 83.86 / 88.02 / 85.88 | 87.92 / 86.05 / 87.61
Lymph node | 92.29 / 95.42 / 93.83 | 93.29 / 94.79 / 94.04 | 91.52 / 93.07 / 92.27
Location | 84.85 / 87.40 / 86.1 | 86.99 / 89.3 / 88.12 | 87.93 / 86.99 / 87.44
Size | 91.6 / 95 / 93.24 | 92.29 / 94.92 / 93.56 | 94.03 / 94.44 / 94.22
Negation | 97.66 / 98.45 / 98.02 | 97.77 / 98.79 / 98.26 | 99.12 / 99.11 / 99.11
Density | 64.16 / 69.66 / 66.61 | 68.4 / 71.47 / 69.73 | 75.55 / 68.49 / 71.75
Enhancement | 74.48 / 81.04 / 77.47 | 74.33 / 78.4 / 76.14 | 81.39 / 75.03 / 77.69
Shape | 82.65 / 83.85 / 83.21 | 78.95 / 83.38 / 81 | 82.72 / 81.8 / 82.2
Bronchus | 66.45 / 67.96 / 67.11 | 62.57 / 69.55 / 65.66 | 74.17 / 76.88 / 75.1
Pleura | 81.48 / 79.39 / 80.36 | 83.54 / 83.28 / 83.39 | 84.59 / 77.13 / 80.21
Vessel | 37.52 / 41.59 / 39.05 | 44.5 / 43.13 / 43.27 | 68.09 / 54.53 / 58.51
Vertebral body | 36.43 / 60.17 / 42.75 | 46.52 / 67.5 / 53.17 | 82 / 66.67 / 72.24
Effusion | 97.02 / 97.25 / 97.11 | 95.77 / 97.3 / 96.51 | 98.32 / 97.25 / 97.78
PAOP^d | 47.67 / 51.33 / 49.11 | 50.28 / 57.25 / 53.04 | 65.86 / 53.2 / 57.46
Macroaverage | 74.1 / 78.31 / 75.67 | 75.65 / 79.79 / 77.27 | 83.8 / 79.33 / 80.97
Microaverage | 85.85 / 88.41 / 87.11 | 86.56 / 89.32 / 87.92 | 89.28 / 87.78 / 88.5

^a ID-CNN-CRF: iterated dilated convolutional neural networks-conditional random field.

^b Bi-LSTM-CRF: bidirectional long short-term memory networks-conditional random field.

^c BERT: bidirectional encoder representation from transformers.

^d PAOP: pulmonary atelectasis/obstructive pneumonitis.

Figure 5. F1 scores with bars showing the standard deviations of the named entity recognition models. Bi-LSTM-CRF: bidirectional long short-term memory networks-conditional random field; BERT: bidirectional encoder representation from transformers; ID-CNN-CRF: iterated dilated convolutional neural networks-conditional random field; PAOP: pulmonary atelectasis/obstructive pneumonitis.
Table 6. Performance of the named entity recognition models calculated using the inexact matching scheme. Each cell lists precision / recall / F1 score (%).

Entity | ID-CNN-CRF^a | Bi-LSTM-CRF^b | BERT^c
Mass | 89.78 / 94.81 / 92.19 | 90.71 / 95.2 / 92.89 | 94.11 / 92.05 / 93.02
Lymph node | 97.11 / 100.42 / 98.73 | 97.43 / 99 / 98.2 | 97.8 / 99.41 / 98.59
Location | 91.88 / 94.66 / 93.24 | 92.73 / 95.2 / 93.95 | 95.01 / 93.99 / 94.47
Size | 95.33 / 98.9 / 97.06 | 96.28 / 99.06 / 97.62 | 96.58 / 97.03 / 96.79
Negation | 97.66 / 98.45 / 98.02 | 97.77 / 98.79 / 98.26 | 99.12 / 99.11 / 99.11
Density | 84.09 / 90.53 / 86.95 | 82.39 / 86.13 / 84.01 | 94.48 / 85.49 / 89.64
Enhancement | 85.64 / 93.26 / 89.11 | 86.79 / 92.3 / 89.25 | 91.53 / 85.03 / 87.73
Shape | 91.59 / 92.92 / 92.22 | 88.76 / 93.94 / 91.16 | 92.23 / 91.12 / 91.6
Bronchus | 83.20 / 85.03 / 84.01 | 79.4 / 89.19 / 83.73 | 84.76 / 87.76 / 85.75
Pleura | 93.07 / 90.44 / 91.67 | 92.86 / 92.54 / 92.67 | 93.12 / 85.7 / 88.73
Vessel | 81.29 / 79.18 / 79.09 | 84.66 / 73.82 / 77.52 | 89.03 / 67.5 / 75.58
Vertebral body | 63.81 / 92.5 / 71.76 | 65.76 / 86.67 / 72.28 | 92 / 73.33 / 80.24
Effusion | 98.4 / 98.64 / 98.5 | 98.18 / 99.74 / 98.93 | 100 / 98.91 / 99.45
PAOP^d | 80.1 / 84.64 / 81.84 | 84.23 / 96.39 / 89.03 | 90.23 / 74.99 / 80.13
Macroaverage | 88.07 / 92.46 / 89.6 | 88.43 / 92.71 / 89.96 | 93.57 / 87.96 / 90.06
Microaverage | 92.66 / 95.42 / 94.01 | 92.87 / 95.84 / 94.32 | 95.39 / 93.81 / 94.57

^a ID-CNN-CRF: iterated dilated convolutional neural networks-conditional random field.

^b Bi-LSTM-CRF: bidirectional long short-term memory networks-conditional random field.

^c BERT: bidirectional encoder representation from transformers.

^d PAOP: pulmonary atelectasis/obstructive pneumonitis.

Figure 6. Inexact matching F1 scores with bars showing the standard deviations of the named entity recognition models. Bi-LSTM-CRF: bidirectional long short-term memory networks-conditional random field; BERT: bidirectional encoder representation from transformers; ID-CNN-CRF: iterated dilated convolutional neural networks-conditional random field; PAOP: pulmonary atelectasis/obstructive pneumonitis.

RC Results

To evaluate the proposed RC method, the data set was randomly separated such that 70%, 10%, and 20% of the CT reports were used as the training, validation, and test sets, respectively. Attention-Bi-LSTM and BERT were selected as the baselines. The annotated entities were provided in this step for evaluating the performance of the RC models. The hyperparameters of the RC models are listed in Multimedia Appendix 5. We also repeated the entire training and evaluation process five times with different random seeds to alleviate the possible bias caused by data partitioning.

Table 7 and Figure 7 show the experimental results of the RC models. As depicted in Table 7, all four models achieve excellent performance with macro-F1 values above 95% and micro-F1 values above 97%. Comparing the baseline and proposed methods indicates that the RSC improves the performance of both baseline models, especially for the Related relation. Moreover, BERT-RSC achieves the best performance among all the models.

Table 7. Performance of the proposed and baseline relation classification models. Each cell lists precision / recall / F1 score (%).

Relation | Attention-Bi-LSTM^a (baseline) | BERT^b (baseline) | Attention-Bi-LSTM-RSC^c (proposed) | BERT-RSC^d (proposed)
At | 96.02 / 94.47 / 95.23 | 96.3 / 95.39 / 95.83 | 96.25 / 94.79 / 95.5 | 96.95 / 95.55 / 96.23
SizeOf | 97.05 / 97.51 / 97.27 | 98.13 / 97.35 / 97.73 | 97.19 / 98.1 / 97.61 | 98.11 / 98.42 / 98.25
Related | 88.17 / 91.47 / 89.65 | 85.22 / 94.7 / 89.64 | 88.95 / 92.31 / 90.55 | 89.17 / 96.27 / 92.56
Negate | 98.7 / 97.07 / 97.87 | 99.38 / 99.63 / 99.5 | 99.33 / 97.82 / 98.56 | 99.38 / 99.74 / 99.56
NoRelation | 98.7 / 98.67 / 98.68 | 99.11 / 98.57 / 98.84 | 98.83 / 98.77 / 98.8 | 99.22 / 98.87 / 99.05
Macroaverage | 95.73 / 95.84 / 95.74 | 95.63 / 97.13 / 96.31 | 96.11 / 96.36 / 96.2 | 96.57 / 97.77 / 97.13
Microaverage | 97.88 / 97.88 / 97.88 | 98.11 / 98.11 / 98.11 | 98.08 / 98.08 / 98.08 | 98.48 / 98.48 / 98.37

^a Attention-Bi-LSTM: attention-based bidirectional long short-term memory networks.

^b BERT: bidirectional encoder representation from transformers.

^c Attention-Bi-LSTM-RSC: attention-based bidirectional long short-term memory networks-relation sign constraint.

^d BERT-RSC: bidirectional encoder representation from transformers-relation sign constraint.

Figure 7. F1 scores with bars showing the standard deviations of the relation classification models. Attention-Bi-LSTM: attention-based bidirectional long short-term memory networks; Attention-Bi-LSTM-RSC: attention-based bidirectional long short-term memory-relation sign constraint; BERT: bidirectional encoder representation from transformers; BERT-RSC: bidirectional encoder representation from transformers-relation sign constraint.

PP Results

Based on the experimental results presented above, we selected the BERT model for both NER and RC. Note that instead of using the annotated data, we directly used the output of the NER model as the input for RC and employed the PP module to analyze the triples extracted by NER and RC to verify the performance of the IE system. We randomly selected 50 reports, for which both annotators manually answered the 22 questions. Table 8 shows the number of positive answers annotated for each question in the 50 reports and the experimental results of the IE system for each question. The experimental results show that the IE system achieves a macro-F1 score of 94.57% and a micro-F1 score of 96.74%, indicating that the system can effectively extract information related to lung cancer staging from CT reports.

By analyzing the incorrect answers, we found that the main reason for inaccurate extraction was that some entities or relations were not recognized by the system. For example, missing “Mass” entities or “At” relations made it impossible to determine the relative position between the primary tumor and other nodules, resulting in low recall values for Q6, Q9, and Q18. Besides, missing “Bronchus,” “PAOP,” “Vessel,” “Density,” and “Enhancement” entities led to low recall values for Q3, Q5, Q7, Q21, and Q22, for which the relevant descriptions are inherently scarce.

Table 8. Experimental results of the developed information extraction system.

No. | Number of positive answers annotated | Precision (%) | Recall (%) | F1 score (%)
1 | 50 | 100 | 100 | 100
2 | 47 | 97.83 | 95.74 | 96.77
3 | 16 | 100 | 87.50 | 93.33
4 | 27 | 100 | 96.3 | 98.11
5 | 6 | 83.33 | 83.33 | 83.33
6 | 17 | 100 | 82.35 | 90.32
7 | 5 | 100 | 80 | 88.89
8 | 2 | 100 | 100 | 100
9 | 14 | 100 | 85.71 | 92.31
10 | 28 | 100 | 100 | 100
11 | 18 | 100 | 100 | 100
12 | 22 | 95.45 | 95.45 | 95.45
13 | 1 | 100 | 100 | 100
14 | 19 | 95 | 100 | 97.44
15 | 6 | 100 | 100 | 100
16 | 5 | 100 | 100 | 100
17 | 20 | 95 | 95 | 95
18 | 5 | 80 | 80 | 80
19 | 2 | 100 | 100 | 100
20 | 28 | 96.3 | 92.86 | 94.55
21 | 14 | 100 | 85.71 | 92.31
22 | 16 | 85.71 | 80 | 82.76
Macroaverage | — | 96.76 | 92 | 94.57
Microaverage | — | 97.49 | 95.99 | 96.74

Principal Findings

In this study, we developed an IE system to extract information related to lung cancer staging from CT reports automatically. The experimental results indicate that the IE system can effectively extract the useful entities and relations using the NER and RC models and accurately obtain the answers to the questions about lung cancer staging using the PP module. The extracted information shows significant potential to support further research about accurate lung cancer clinical staging.

Although the macro-F1 score of NER is only 80.97%, which seems insufficient to support RC and PP, the IE system still achieves satisfactory results. The main reason is that the PP module exploits the key characters in the extracted entities or only the presence of the entities to obtain the answers but does not need the complete entities. For example, the annotation of the sentence “右肺下叶基底段见软组织密度肿块” is [Location_B, Location_I, Location_I, Location_I, Location_I, Location_I, Location_I, O, Mass_B, Mass_I, Mass_I, Mass_I, Mass_I, Mass_I, Mass_I], but the NER result is [Location_B, Location_I, Location_I, Location_I, Location_I, O, Location_I, O, Mass_B, Mass_I, Mass_I, Mass_I, Mass_I, Mass_I, Mass_I], which means the Location entity extracted is merely “右肺下叶基.” However, this partial Location entity is correctly linked to the Mass entity “软组织密度肿块” with an “At” relation by the RC model, and the key characters “右” and “下” in the Location entity can support the following PP step. The high macro-F1 and micro-F1 of the inexact matching scheme indicate that most of the entities can be extracted completely or partially by the NER model. Furthermore, the extractions cover most of the key characters needed during the PP step.

For the RC task, all four models achieve satisfactory performance. This is because the descriptions are similar in many sentences, so the models can easily learn these patterns. However, for the Related relation, none of the models obtains perfect performance. The main reason is that some types of entities like “Vertebral Body” and “Vessel” are rare and have diverse descriptions, making it difficult for the models to learn the corresponding patterns. The addition of the RSC may make the descriptions more uniform so that the models can learn the patterns more easily.

For the NER and RC tasks, the advanced pretrained BERT model achieves better performance compared to the conventional CNN and RNN methods, thus verifying the superiority of large language representation models for various NLP tasks.

Limitations

Although the rule-based PP module can accurately obtain the answers to the defined questions by analyzing the extracted entities and relations, these hard-coded rules are difficult to maintain and update. Furthermore, for better use of clinical knowledge (eg, enlarged lymph nodes with a minimum diameter greater than 10 mm are often considered metastatic), we need to establish a more comprehensive knowledge base to analyze the extracted information. Ontology, as a formal representation of medical knowledge, has become the standard method to develop knowledge bases [49,50]. In future, we can use the Web Ontology Language (OWL) [51] to construct the knowledge graph and employ the Semantic Web Rule Language (SWRL) [52] to develop the reasoning rules for lung cancer staging.

In this study, we explored the feasibility of extracting information related to lung cancer staging from CT reports using an NER+RC+PP pipeline in a single hospital. When generalizing this approach to other hospitals, the entity and relation definitions as well as the annotation strategy can be important references for the same application, and the developed pipeline can also be reused. However, if researchers want to customize the entity types or relation types to suit their purpose or if the writing style of CT reports is significantly different from that in the reports that we used, fine-tuning of BERT using the newly annotated reports may be a possible way to obtain satisfactory generalization.

Future Research

In the current study, pathological staging was not applied as the gold standard to evaluate the correctness of the extracted results. This is mainly because in clinical practice, clinicians use not only CT but also PET, MRI, and other diagnostic modalities to stage patients. Therefore, it is insufficient to use only the information extracted from the CT report to stage the patients. In future, we plan to extract staging information from other examination reports and use this multisource information to verify the staging correctness from a more comprehensive perspective. Moreover, by combining various details such as laboratory tests, disease history, and radiomics data, we can employ advanced machine learning algorithms to develop clinical staging prediction models to further alleviate the large number of disagreements between clinical and pathological stages.

Conclusions

In this study, we developed an IE system to extract lung cancer staging information from CT reports automatically using NLP techniques. Experimental results obtained using real clinical data demonstrated that the IE system could effectively extract the relevant entities and relations using the NER and RC models. It could also accurately answer the staging questions using the rule-based PP module, thus proving the potential of this system for lung cancer staging verification and clinical staging prediction.

Acknowledgments

This work was supported by the National Key R&D Program of China (grant 2018YFC0910700).

Authors' Contributions

DH, SL, XL, and NW conceptualized the study. SL acquired the clinical data. SL, YW, and HZ annotated the data. HZ, DH, and YW designed and implemented the algorithms and conducted the experiments. DH, HZ, YW, and SL analyzed the experimental results. DH wrote the manuscript with revision assistance from SL, XL, and NW. All authors have read and approved the manuscript.

Conflicts of Interest

None declared.

Multimedia Appendix 1

Parsed questions about lung cancer staging.

PDF File (Adobe PDF File), 123 KB

Multimedia Appendix 2

Annotation guideline.

PDF File (Adobe PDF File), 167 KB

Multimedia Appendix 3

Postprocessing rules.

PDF File (Adobe PDF File), 101 KB

Multimedia Appendix 4

Evaluation metrics.

PDF File (Adobe PDF File), 52 KB

Multimedia Appendix 5

Hyperparameters of the named entity recognition and relation classification models. Attention-Bi-LSTM-RSC: attention-based bidirectional long short-term memory-relation sign constraint; Bi-LSTM-CRF: bidirectional long short-term memory networks-conditional random field; BERT: bidirectional encoder representation from transformers; BERT-RSC: bidirectional encoder representation from transformers-relation sign constraint; ID-CNN-CRF: iterated dilated convolutional neural networks-conditional random field.

PDF File (Adobe PDF File), 115 KB

References

  1. Bray F, Ferlay J, Soerjomataram I, Siegel RL, Torre LA, Jemal A. Global cancer statistics 2018: GLOBOCAN estimates of incidence and mortality worldwide for 36 cancers in 185 countries. CA Cancer J Clin 2018 Nov;68(6):394-424 [FREE Full text] [CrossRef] [Medline]
  2. Ettinger D, Wood D, Aggarwal C, Aisner D, Akerley W, Bauman J, et al. NCCN clinical practice guidelines in oncology. Non-Small Cell Lung Cancer.   URL: https://www.nccn.org/guidelines/guidelines-detail?category=1&id=1450 [accessed 2019-09-16]
  3. Navani N, Fisher DJ, Tierney JF, Stephens RJ, Burdett S, NSCLC Meta-analysis Collaborative Group. The accuracy of clinical staging of stage I-IIIa non-small cell lung cancer: an analysis based on individual participant data. Chest 2019 Mar;155(3):502-509 [FREE Full text] [CrossRef] [Medline]
  4. Heineman DJ, Ten Berge MG, Daniels JM, Versteegh MI, Marang-van de Mheen PJ, Wouters MW, et al. The quality of staging non-small cell lung cancer in the Netherlands: data from the Dutch lung surgery audit. Ann Thorac Surg 2016 Nov;102(5):1622-1629. [CrossRef] [Medline]
  5. Wood D, Kazerooni E, Baum S, Eapen G, Ettinger D, Ferguson J, et al. NCCN clinical practice guidelines in oncology. Lung Cancer Screening.   URL: https://www.nccn.org/guidelines/guidelines-detail?category=2&id=1441 [accessed 2019-09-16]
  6. Yim W, Yetisgen M, Harris WP, Kwan SW. Natural language processing in oncology: a review. JAMA Oncol 2016 Jun 01;2(6):797-804. [CrossRef] [Medline]
  7. Sheikhalishahi S, Miotto R, Dudley JT, Lavelli A, Rinaldi F, Osmani V. Natural language processing of clinical notes on chronic diseases: systematic review. JMIR Med Inform 2019 Apr 27;7(2):e12239 [FREE Full text] [CrossRef] [Medline]
  8. Detterbeck FC, Boffa DJ, Kim AW, Tanoue LT. The eighth edition lung cancer stage classification. Chest 2017 Jan;151(1):193-203. [CrossRef] [Medline]
  9. Strubell E, Verga P, Belanger D, McCallum A. Fast and accurate entity recognition with iterated dilated convolutions. In: Proceedings of the 2017 Conference on Empirical Methods in Natural Language Processing.: Association for Computational Linguistics; 2017 Presented at: Conference on Empirical Methods in Natural Language Processing; Sept 07-11; Copenhagen, Denmark p. 2670-2680. [CrossRef]
  10. Huang Z, Xu W, Yu K. Bidirectional LSTM-CRF models for sequence tagging. ArXiv. Preprint posted online on Aug 9, 2015 [FREE Full text]
  11. Devlin J, Chang M, Lee K, Toutanova K. BERT: pre-training of deep bidirectional transformers for language understanding. ArXiv. Preprint posted online on Oct 11, 2018.
  12. Datta S, Bernstam EV, Roberts K. A frame semantic overview of NLP-based information extraction for cancer-related EHR notes. J Biomed Inform 2019 Oct 04;100:103301 [FREE Full text] [CrossRef] [Medline]
  13. Nguyen AN, Lawley MJ, Hansen DP, Bowman RV, Clarke BE, Duhig EE, et al. Symbolic rule-based classification of lung cancer stages from free-text pathology reports. J Am Med Inform Assoc 2010 Jul;17(4):440-445 [FREE Full text] [CrossRef] [Medline]
  14. Warner JL, Levy MA, Neuss MN, Warner JL, Levy MA, Neuss MN. ReCAP: feasibility and accuracy of extracting cancer stage information from narrative electronic health record data. J Oncol Pract 2016 Feb;12(2):157-158 [FREE Full text] [CrossRef] [Medline]
  15. Schroeck FR, Lynch KE, Chang JW, MacKenzie TA, Seigne JD, Robertson DJ, et al. Extent of risk-aligned surveillance for cancer recurrence among patients with early-stage bladder cancer. JAMA Netw Open 2018 Sep;1(5):e183442 [FREE Full text] [CrossRef] [Medline]
  16. Cary C, Roberts A, Church AK, Eckert G, Ouyang F, He J, et al. Development of a novel algorithm to identify staging and lines of therapy for bladder cancer. J Clin Oncol 2017 May 20;35(15_suppl):e18235 [FREE Full text] [CrossRef]
  17. Schroeck F, Lynch K, Chang JW, Robertson D, Seigne J, Goodney P, et al. MP44-01 a national study of risk-aligned surveillance practice for non-muscle invasive bladder cancer. J Urol 2018 Apr;199(4S):e587. [CrossRef]
  18. AAlAbdulsalam AK, Garvin JH, Redd A, Carter ME, Sweeny C, Meystre SM. Automated extraction and classification of cancer stage mentions from unstructured text fields in a central cancer registry. AMIA Jt Summits Transl Sci Proc 2018 May;2017:16-25 [FREE Full text] [Medline]
  19. Nunes A, Green E, Dalvi T, Lewis J, Jones N, Seeger J. Abstract P5-08-20: a real-world evidence study to define the prevalence of endocrine therapy-naïve hormone receptor-positive locally advanced or metastatic breast cancer in the US. Cancer Res 2017 Feb;77(4 Supplement):P5-08-20 [FREE Full text] [CrossRef]
  20. Giri A, Levinson R, Keene S, Holman G, Smith S, Clayton L, et al. Abstract 4229: preliminary results from the pharmacogenetics ovarian cancer knowledge to individualize treatment (POCKIT) study. Cancer Res 2018 Jul;78(13):4229 [FREE Full text] [CrossRef]
  21. Savova GK, Tseytlin E, Finan S, Castine M, Miller T, Medvedeva O, et al. DeepPhe: a natural language processing system for extracting cancer phenotypes from clinical records. Cancer Res 2017 Nov 01;77(21):e115-e118 [FREE Full text] [CrossRef] [Medline]
  22. Ping X, Tseng Y, Chung Y, Wu Y, Hsu C, Yang P, et al. Information extraction for tracking liver cancer patients' statuses: from mixture of clinical narrative report types. Telemed J E Health 2013 Sep;19(9):704-710. [CrossRef] [Medline]
  23. Yim W, Denman T, Kwan SW, Yetisgen M. Tumor information extraction in radiology reports for hepatocellular carcinoma patients. AMIA Jt Summits Transl Sci Proc 2016 Jul;2016:455-464 [FREE Full text] [Medline]
  24. Chen L, Song L, Shao Y, Li D, Ding K. Using natural language processing to extract clinically useful information from Chinese electronic medical records. Int J Med Inform 2019 Apr;124:6-12. [CrossRef] [Medline]
  25. Bozkurt S, Lipson JA, Senol U, Rubin DL. Automatic abstraction of imaging observations with their characteristics from mammography reports. J Am Med Inform Assoc 2015 Apr;22(e1):e81-e92. [CrossRef] [Medline]
  26. Bozkurt S, Gimenez F, Burnside ES, Gulkesen KH, Rubin DL. Using automatically extracted information from mammography reports for decision-support. J Biomed Inform 2016 Aug;62:224-231 [FREE Full text] [CrossRef] [Medline]
  27. Jauregi Unanue I, Zare Borzeshi E, Piccardi M. Recurrent neural networks with specialized word embeddings for health-domain named-entity recognition. J Biomed Inform 2017 Dec;76:102-109 [FREE Full text] [CrossRef] [Medline]
  28. Zhang D, Wang D. Relation classification via recurrent neural network. ArXiv. Preprint posted online on Aug 5, 2015 [FREE Full text]
  29. Zhang Y, Wang X, Hou Z, Li J. Clinical named entity recognition from Chinese electronic health records via machine learning methods. JMIR Med Inform 2018 Dec 17;6(4):e50 [FREE Full text] [CrossRef] [Medline]
  30. Liu Z, Yang M, Wang X, Chen Q, Tang B, Wang Z, et al. Entity recognition from clinical texts via recurrent neural network. BMC Med Inform Decis Mak 2017 Jul 05;17(Suppl 2):67 [FREE Full text] [CrossRef] [Medline]
  31. Zeng D, Liu K, Lai S, Zhou G, Zhao J. Relation classification via convolutional deep neural network. Dublin, Ireland: Dublin City University and Association for Computational Linguistics; 2014 Presented at: The 25th International Conference on Computational Linguistics; August 23-29, 2014; Dublin, Ireland p. 2335-2344.
  32. Zhou P, Shi W, Tian J, Qi Z, Li B, Hao H, et al. Attention-based bidirectional long short-term memory networks for relation classification. Berlin, Germany: Association for Computational Linguistics; 2016 Presented at: The 54th Annual Meeting of the Association for Computational Linguistics; August 7-12, 2016; Berlin, Germany   URL: https://aclanthology.org/P16-2034/ [CrossRef]
  33. Luo Y. Recurrent neural networks for classifying relations in clinical notes. J Biomed Inform 2017 Aug;72:85-95 [FREE Full text] [CrossRef] [Medline]
  34. Si Y, Roberts K. A frame-based NLP system for cancer-related information extraction. AMIA Annu Symp Proc 2018;2018:1524-1533 [FREE Full text] [Medline]
  35. Gao S, Young MT, Qiu JX, Yoon H, Christian JB, Fearn PA, et al. Hierarchical attention networks for information extraction from cancer pathology reports. J Am Med Inform Assoc 2018 Mar 01;25(3):321-330 [FREE Full text] [CrossRef] [Medline]
  36. Hripcsak G, Rothschild AS. Agreement, the f-measure, and reliability in information retrieval. J Am Med Inform Assoc 2005 May;12(3):296-298 [FREE Full text] [CrossRef] [Medline]
  37. Stenetorp P, Pyysalo S, Topić G, Ohta T, Ananiadou S, Tsujii J. brat: a web-based tool for NLP-assisted text annotation. In: Proceedings of the 13th Conference of the European Chapter of the Association for Computational Linguistics.: Association for Computational Linguistics; 2012 Presented at: The 13th Conference of the European Chapter of the Association for Computational Linguistics; April 23-27, 2012; Avignon, France p. 102-107.
  38. Mikolov T, Chen K, Corrado G, Dean J. Efficient estimation of word representations in vector space. ArXiv. Preprint posted online on Jan 16, 2013 [FREE Full text]
  39. Sun J. jieba. Jieba Chinese word segmentation module.   URL: https://github.com/fxsjy/jieba [accessed 2021-07-03]
  40. Lei J, Tang B, Lu X, Gao K, Jiang M, Xu H. A comprehensive study of named entity recognition in Chinese clinical text. J Am Med Inform Assoc 2014 Sep;21(5):808-814 [FREE Full text] [CrossRef] [Medline]
  41. Wang H, Zhang W, Zeng Q, Li Z, Feng K, Liu L. Extracting important information from Chinese Operation Notes with natural language processing methods. J Biomed Inform 2014 Apr;48:130-136 [FREE Full text] [CrossRef] [Medline]
  42. Wang Y, Wang L, Rastegar-Mojarad M, Moon S, Shen F, Afzal N, et al. Clinical information extraction applications: a literature review. J Biomed Inform 2018 Jan;77:34-49 [FREE Full text] [CrossRef] [Medline]
  43. Kang T, Zhang S, Tang Y, Hruby GW, Rusanov A, Elhadad N, et al. EliIE: an open-source information extraction system for clinical trial eligibility criteria. J Am Med Inform Assoc 2017 Nov 01;24(6):1062-1071 [FREE Full text] [CrossRef] [Medline]
  44. Xu H, Stenner SP, Doan S, Johnson KB, Waitman LR, Denny JC. MedEx: a medication information extraction system for clinical narratives. J Am Med Inform Assoc 2010 Jan;17(1):19-24 [FREE Full text] [CrossRef] [Medline]
  45. Yu F, Koltun V. Multi-scale context aggregation by dilated convolutions. ArXiv. Preprint posted online on Nov 23, 2015 [FREE Full text]
  46. Vaswani A, Shazeer N, Parmar N, Uszkoreit J, Jones L, Gomez A, et al. Attention is all you need. ArXiv. Preprint posted online on June 12, 2017.
  47. GuoDong Z, Jian S, Jie Z, Min Z. Exploring various knowledge in relation extraction. In: Proceedings of the 43rd Annual Meeting of the Association for Computational Linguistics.: Association for Computational Linguistics; 2005 Jun Presented at: The 43rd Annual Meeting of the Association for Computational Linguistics; June 25-30, 2005; Ann Arbor, Michigan p. 427-434. [CrossRef]
  48. Mintz M, Bills S, Snow R, Jurafsky D. Distant supervision for relation extraction without labeled data. In: Proceedings of the Joint Conference of the 47th Annual Meeting of the ACL and the 4th International Joint Conference on Natural Language Processing of the AFNLP.: Association for Computational Linguistics; 2009 Aug Presented at: The Joint Conference of the 47th Annual Meeting of the ACL and the 4th International Joint Conference on Natural Language Processing of the AFNLP; Aug 7-12, 2009; Suntec, Singapore p. 1003-1011.
  49. Chen R, Huang Y, Bau C, Chen S. A recommendation system based on domain ontology and SWRL for anti-diabetic drugs selection. Expert Syst Appl 2012 Mar;39(4):3995-4006 [FREE Full text] [CrossRef]
  50. Zhang Y, Gou L, Tian Y, Li T, Zhang M, Li J. Design and development of a sharable clinical decision support system based on a semantic web service framework. J Med Syst 2016 May;40(5):118. [CrossRef] [Medline]
  51. OWL 2 Web Ontology Language Document Overview (Second Edition).   URL: https://www.w3.org/TR/owl2-overview/ [accessed 2021-07-03]
  52. SWRL: A Semantic Web Rule Language Combining OWL and RuleML.   URL: https://www.w3.org/Submission/SWRL/ [accessed 2021-07-03]


Attention-Bi-LSTM: attention-based bidirectional long short-term memory networks
Bi-LSTM: bidirectional long short-term memory networks
Bi-LSTM-CRF: bidirectional long short-term memory networks-conditional random field
BERT: bidirectional encoder representation from transformers
CLIP: Cancer of Liver Italian Program
CRF: conditional random field
CT: computed tomography
ID-CNN: iterated dilated convolutional neural networks
ID-CNN-CRF: iterated dilated convolutional neural networks-conditional random field
IE: information extraction
MRI: magnetic resonance imaging
NER: named entity recognition
NLP: natural language processing
OWL: Web Ontology Language
PAOP: pulmonary atelectasis/obstructive pneumonitis
PET: positron emission tomography
PP: postprocessing
RC: relation classification
RNN: recurrent neural network
RSC: relation sign constraint
SNOMED CT: Systematized Nomenclature of Medicine Clinical Terms
SWRL: Semantic Web Rule Language
UMLS: Unified Medical Language System


Edited by T Hao, Z Huang, B Tang; submitted 15.02.21; peer-reviewed by Z Su, H Chen, K Roberts; comments to author 29.04.21; revised version received 27.05.21; accepted 07.06.21; published 21.07.21

Copyright

©Danqing Hu, Huanyao Zhang, Shaolei Li, Yuhong Wang, Nan Wu, Xudong Lu. Originally published in JMIR Medical Informatics (https://medinform.jmir.org), 21.07.2021.

This is an open-access article distributed under the terms of the Creative Commons Attribution License (https://creativecommons.org/licenses/by/4.0/), which permits unrestricted use, distribution, and reproduction in any medium, provided the original work, first published in JMIR Medical Informatics, is properly cited. The complete bibliographic information, a link to the original publication on https://medinform.jmir.org/, as well as this copyright and license information must be included.