Published on 08.04.2021 in Vol 9, No 4 (2021): April

Preprints (earlier versions) of this paper are available at https://preprints.jmir.org/preprint/19055.
Medical Data Feature Learning Based on Probability and Depth Learning Mining: Model Development and Validation


Authors of this article:

Yuanlin Yang1,2; Dehua Li2,3

Original Paper

1Department of Logistics Management, West China Second University Hospital, Sichuan University, Chengdu, China

2Key Laboratory of Obstetric and Gynecologic and Pediatric Disease and Birth Defects of Ministry of Education, Sichuan University, Chengdu, China

3Quality Assessment Office, Nursing Department, West China Second University Hospital, Sichuan University, Chengdu, China

*All authors contributed equally

Corresponding Author:

Dehua Li, MPhil, MBBS

Quality Assessment Office, Nursing Department

West China Second University Hospital

Sichuan University

No 20, Section 3, Renmin South Road

Wuhou District

Chengdu

China

Phone: 86 1 808 684 1792

Email: 562372162@qq.com


Background: Big data technology provides unlimited potential for efficient storage, processing, querying, and analysis of medical data. Technologies such as deep learning and machine learning simulate human thinking, assist physicians in diagnosis and treatment, provide personalized health care services, and promote the use of intelligent processes in health care applications.

Objective: The aim of this paper was to analyze health care data, develop an intelligent application to predict the number of hospital outpatient visits, which has a mass health impact, and analyze the characteristics of health care big data. Designing a corresponding data feature learning model will help patients receive more effective treatment and will enable rational use of medical resources.

Methods: A cascaded deep model was implemented by constructing a cascaded deep learning framework and by studying and analyzing the specific feature transformation, feature selection, and classifier algorithms used in the framework. To develop a medical data feature learning model based on probabilistic and deep learning mining, we mined information from medical big data and developed an intelligent application that studies the differences in medical data for disease risk assessment and enables feature learning of the related multimodal data. On this basis, we propose a cascaded data feature learning model.

Results: The deep model created in this paper is more suitable for forecasting daily outpatient volumes than weekly or monthly volumes. We believe that there are two reasons for this: on the one hand, the training data set in the daily outpatient volume forecast model is larger, so the trained parameters of the model more closely fit the actual data relationship. On the other hand, the weekly and monthly outpatient volumes are accumulations of the daily outpatient volume; therefore, prediction errors gradually accumulate, and the greater the interval, the lower the prediction accuracy.

Conclusions: Several data feature learning models are proposed to extract the relationships between outpatient volume data and obtain the precise predictive value of the outpatient volume, which is very helpful for the rational allocation of medical resources and the promotion of intelligent medical treatment.

JMIR Med Inform 2021;9(4):e19055

doi:10.2196/19055


Introduction

Over the past two decades, there has been dramatic growth in the amount of data generated in many areas worldwide, including health care data, sensor data, various types of user-generated data, internet data, and financial company data. The notion of big data has emerged as the amount of data in every field grows; however, "big data" is an abstract concept that does not simply mean a large collection of data, and its characteristics differ from those of conventional massive data sets. Research examining concepts such as the Internet of Things and wearable technology has helped reduce the cost of real-time monitoring of human health, which has driven development in this industry [1]. Big data technology provides unlimited potential for efficient storage, processing, querying, and analysis of medical data. Technologies such as deep learning and machine learning simulate human thinking, assist physicians in diagnosis and treatment, provide personalized health services, and promote intelligent processes in health care applications. The development and application of the Internet of Things, wireless networks, the internet, cloud computing technology, and related infrastructure provide guarantees for the analysis, processing, and transmission of big data. In short, in the field of health care, the rapid development of big data analysis, wearable technology, artificial intelligence, cloud computing, supercomputing technology, and related fields all provide possibilities for the realization and development of smart medical applications [2,3].

Medical health data are multimodal, complex data that continue to grow rapidly and contain a wealth of information. The challenges associated with medical health data include how to quickly and accurately collect such data and how to efficiently use high-speed networks to reliably and efficiently transmit them [4]. Other challenges include the use of artificial intelligence–related machine learning and deep learning techniques to extract useful information from big health care data and the development of intelligent applications for medical staff and ordinary people. In this paper, our aims included analyzing health care data, addressing intelligent application–related issues, predicting the number of hospital outpatient visits, which has a mass health impact, and analyzing the characteristics of health care big data. Designing a corresponding data feature learning model will help patients receive more effective treatment and enable rational use of medical resources.


Methods

Cascaded Deep Learning Model

Model Framework

Deep learning is a process of feature extraction and combination. Through a combination of multilayer nonlinear operations, the model can abstract high-order semantic information from data. In practice, a cascaded multilayer operation is performed on the preprocessed raw data. Each layer consists of three sublayers: feature extraction, nonlinear transformation, and feature selection. The input of the first layer is the preprocessed original data, the input of each subsequent layer is the output of the layer above it, and the output of the final layer is the final abstract representation of the data. In each layer, the representation characteristics of the data are extracted by feature transformation, which is generally a dimension-increasing process [5]. Compared with the input, the transformed features have certain representative characteristics; however, the number of features is much greater. Through feature selection, we reduce the dimension again and, at the same time, choose the discriminant or representative features. Some feature selection methods are used to improve the regional adaptability of the model, such as the max-pooling and mean-pooling operations in convolutional neural networks. Nonlinear transformation is usually performed between feature transformation and feature selection, and it is an important part of the framework: it imitates the activation and inactivation of neurons. Nonlinear transformation is also essential when a linear transformation is used for feature extraction, because a stack of linear transformations is still a linear transformation; without nonlinearity, the multilayer operation would play the same role as learning a single linear transformation directly and would not provide layer-by-layer abstraction.

In this paper, to eliminate possible differences in measurement scales between different features, the original data were normalized by the maximum-minimum method. In the feature extraction stage, to obtain the structural information of the data, a feature transformation method with little dependence on the domain was selected; that is, the vanishing component analysis (VCA) method was used to extract the polynomial characteristics of the data. Because the VCA method is itself a nonlinear transformation, a separate nonlinear operation is not needed in the framework [6]. Because the dimension of the VCA input data cannot be too high, we used principal component analysis (PCA) to reduce the dimensions of the VCA input data (PCA reduces the feature dimensions with little loss of information; therefore, it can also be regarded as a special feature selection method). In the final classification, we used a boosting algorithm, which is a method that can both classify and select features; for classification, we could use either all the features learned at every level or only the features of the last level. At this point, our deep feature learning framework was formed, as shown in Figure 1.
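To make the framework concrete, the following is a minimal sketch of one such layer expressed as a scikit-learn pipeline. PolynomialFeatures stands in for the VCA transform and SelectFromModel for the L2,p-RSR selection step, since neither has an off-the-shelf implementation; the stage names and parameter values are illustrative assumptions, not the exact configuration used in this paper.

```python
from sklearn.preprocessing import MinMaxScaler, PolynomialFeatures
from sklearn.decomposition import PCA
from sklearn.ensemble import AdaBoostClassifier
from sklearn.feature_selection import SelectFromModel
from sklearn.pipeline import Pipeline

# One layer of the cascade: normalize -> reduce dimension -> polynomial
# feature transformation -> feature selection -> boosting classifier.
layer = Pipeline([
    ("scale", MinMaxScaler()),                      # maximum-minimum normalization
    ("pca", PCA(n_components=13)),                  # keep the polynomial input small
    ("poly", PolynomialFeatures(degree=2)),         # stand-in for the VCA transform
    ("select", SelectFromModel(                     # stand-in for L2,p-RSR selection
        AdaBoostClassifier(n_estimators=50))),
    ("clf", AdaBoostClassifier(n_estimators=100)),  # boosting classification
])
# layer.fit(X_train, y_train); a deeper model would feed the selected
# features of this layer into the next layer's pipeline.
```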

Figure 1. The cascaded deep learning framework. IVI: innovation value intuitive; AdaBoost: adaptive boosting; PCA: principal component analysis; RSR: regularized self-representation; VCA: vanishing component analysis.
Characteristic Learning

Based on the cascaded deep learning framework described above, we implemented a specific deep model; therefore, we needed to study and explain the training process of the model. The cascaded deep model proposed in this paper is a multilayer structure. Between different layers, the output of the upper layer is the input of the next layer. Within the same layer, a PCA dimensionality reduction stage, a VCA feature transformation stage, and an L2,p-RSR feature selection stage, where RSR is regularized self-representation, are included. Each layer of the model learns the output features of the current layer [7]. The abstract information of the different layers is different: the features of the lowest layer are closest to the original feature space, and the high-level features can provide complementary information for the low-level features. We made full use of the characteristics learned at all levels and propose an effective feature combination method. Finally, we used a boosting classifier based on the binary classification problem and extended it to the multiclassification problem.

PCA Dimension Reduction Stage

VCA feature transformation requires stringent control of the data space dimension; therefore, it was necessary to reduce the dimension before the VCA feature transformation. There are two ways to set the PCA dimension reduction: one is to specify the retained dimension directly, and the other is to set the ratio of the retained eigenvalues to the total sum of the eigenvalues. The proportion setting can theoretically control the retention percentage of the data information; however, it is challenging to control the retained feature dimension with this method. In this model, the VCA transform produces a large number of features, and its input space needs a strictly controlled feature dimension; therefore, we directly set the retained dimension of the PCA transform [8]. Because different dimensions retain different information, the performance of the model and the experimental results are affected, and this effect is not positively correlated with the retained dimension. Therefore, in our experiments, it was necessary to tune the PCA dimension.
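In scikit-learn terms, the two modes of PCA dimension reduction described above look as follows; this model uses the first, fixed-dimension mode. A small illustration, not code from the paper:

```python
from sklearn.decomposition import PCA

# Mode 1: specify the retained dimension directly, as done in this model,
# so the VCA input dimension is strictly controlled.
pca_fixed = PCA(n_components=13)

# Mode 2: retain enough components to explain a given fraction of the total
# variance; the resulting dimension is data dependent and hard to control.
pca_ratio = PCA(n_components=0.95)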

VCA Feature Transformation Stage

The VCA method can map raw data into a space of (nearly) vanishing polynomials, thus playing the role of feature extraction. Using this method, it is not necessary to know the domain of the data or other prior knowledge, because as long as the input space is a real matrix, we can learn its vanishing polynomial transformation representation. This transformation method can extract not only the linear features of the data samples but also the nonlinear features; that is, if a first-order polynomial lies in the vanishing space of the data, it captures the linear information of the data, while polynomials of higher degree can extract second-order or even higher-order nonlinear characteristics of the data. In this model, the use of VCA for feature transformation involves two specific problems: (1) setting the polynomial degree and the initial feature dimension; and (2) solving the algorithm using singular value decomposition, where components with the smallest singular values are retained as near-vanishing polynomials.
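The following is a rough numerical sketch of the vanishing-component idea: monomials are evaluated on the training data, coefficient vectors associated with the smallest singular values define polynomials that nearly vanish on the data, and evaluating those polynomials on new samples yields the transformed features. This one-shot SVD is a simplification of the actual degree-by-degree VCA algorithm, and the degree and tolerance values are illustrative assumptions.

```python
import numpy as np
from sklearn.preprocessing import PolynomialFeatures

def near_vanishing_features(X_train, X_new, degree=2, tol=1e-3):
    poly = PolynomialFeatures(degree=degree).fit(X_train)
    M = poly.transform(X_train)            # evaluate all monomials on the data
    _, s, Vt = np.linalg.svd(M, full_matrices=False)
    V0 = Vt[s < tol * s.max()]             # coefficients of near-vanishing polynomials
    return poly.transform(X_new) @ V0.T    # evaluate those polynomials on new samples
```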

L2,p – RSR Feature Selection Phase

VCA feature transformation produces a large number of features, and we need to select features for many reasons. Moreover, feature selection is one of the reasons why deep learning models are effective, as it can effectively select task-related features. The L2,p-RSR method proposed in this paper can not only effectively select the features that play important roles in the linear representation of the feature space but can also exclude the influence of singular samples, owing to the L2,1 norm constraint on the loss term. This method is based on the self-representation property of the feature space; any matrix space data possess this property, so the method is domain independent, which meets the requirements of the generalization ability of our model. In this model, we only needed to set the value of p in the L2,p norm and the regularization parameter λ. The input of the L2,p-RSR feature selection operation is the output space after the VCA transformation, and its output is the output feature of the current layer of the deep model.
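As an illustration, the sketch below solves the self-representation problem for the p=1 (ie, L2,1) case, min_W ||X − XW||2,1 + λ||W||2,1, by iteratively reweighted least squares and ranks features by the row norms of W. The solver details are an assumption; the paper does not specify its optimization procedure.

```python
import numpy as np

def rsr_feature_ranking(X, lam=1.0, n_iter=30, eps=1e-8):
    n, d = X.shape
    W = np.eye(d)
    for _ in range(n_iter):
        g = 1.0 / (2 * np.linalg.norm(X - X @ W, axis=1) + eps)  # L2,1 loss weights (rows = samples)
        q = 1.0 / (2 * np.linalg.norm(W, axis=1) + eps)          # L2,1 penalty weights (rows = features)
        XtG = X.T * g                                            # X^T G with G = diag(g)
        W = np.linalg.solve(XtG @ X + lam * np.diag(q), XtG @ X)
    return np.argsort(-np.linalg.norm(W, axis=1))                # most representative features first
```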

Boosting Classification and Feature Selection

After features are learned, it is necessary to classify them. There are many general classification methods, of which the nearest neighbor classifier is the simplest. Support vector machine (SVM) classifiers are also widely used in research and applications, and kernel-based SVM classifiers can also solve nonlinear problems. For this model, we used a classification method with a feature selection function: the Gentle Adaptive Boosting (AdaBoost) classifier based on a pile function (ie, a decision stump). The Gentle AdaBoost algorithm based on a pile function can not only classify but can also select features from the feature space; that is, each weak classifier selects one feature, so the number of selected features can be controlled by controlling the number of weak classifiers [9]. This is in good agreement with the framework proposed in this paper: the algorithm can be used for classification alone, for feature selection alone, or for both at once. Next, we analyze and implement the boosting classifier based on the pile function.

Brief Summary of the Discrete AdaBoost Algorithm Based on a Pile Function

For the discrete AdaBoost algorithm based on a pile function, the inputs are the training samples X and labels Y, and the output is the classifier model F(x) = sign(Σt ft(x)).

Step 1: Weight matrix initialization: wi = 1/m, i = 1, …, m

Step 2: Repeat for t = 1, 2, …, T

Step 3: For d = 1, …, n, fit a pile function to feature d by weighted least squares: (errd, δd, ad, bd) = argminδ,a,b Σi wi (yi – a·h(xi(d) > δ) – b)2

Step 4: feaId = argmind (errd); (δ, a, b) = (δfeaId, afeaId, bfeaId)

Step 5: ft(x) = a·h(x(feaId) > δ) + b

Step 6: Update: wi ← wi·exp(–yi ft(xi)), i = 1, 2, …, m; standardize w so that Σi wi = 1

Step 7: End repeat

The Gentle AdaBoost algorithm based on the pile function follows the boosting algorithm framework and uses a simple additive classifier model:

F(x) = Σm fm(x)

in which the weak classifier fm is defined as

fm(x) = a·h(x(d) > δ) + b

The h function is the indicator function, and xi(d) represents the d-dimension characteristic of the i-th sample. δ is the threshold (the so-called pile), and a and b are parameters of the linear regression function. When learning the weak classifier, a pile function is fit to each feature of the sample by least squares, and the least squares error value is obtained and recorded; the feature whose error value is the minimum is then selected. Therefore, (d, δ, a, b) can be obtained through the weighted least squares method:

(d, δ, a, b) = argmin Σi wi (yi – a·h(xi(d) > δ) – b)2

After obtaining the weak classifier fm, the weights w are updated:

wi ← wi·exp(–yi fm(xi))

The F function is updated to F = F + fm. The final classification result is sign(F(x)), and the absolute value of F(x) provides the credibility of the classification.
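The following is a compact sketch of this weak learner and boosting loop, assuming binary labels in {−1, +1}. Searching thresholds over the observed feature values is an implementation choice for the sketch, not something specified in the paper.

```python
import numpy as np

def gentle_adaboost(X, y, T=50, w0=None):
    """X: (m, n) features; y: labels in {-1, +1}. Returns the pile functions
    (feaId, delta, a, b) and the final sample weights."""
    m, n = X.shape
    w = np.full(m, 1.0 / m) if w0 is None else w0.copy()
    stumps = []
    for _ in range(T):
        best = None
        for d in range(n):                           # each feature learns a pile function
            for delta in np.unique(X[:, d]):
                h = (X[:, d] > delta).astype(float)
                # weighted least squares fit of y ~ a*h + b (note h*h = h)
                sw, sh = w.sum(), (w * h).sum()
                shy, sy = (w * h * y).sum(), (w * y).sum()
                denom = sw * sh - sh * sh
                if abs(denom) < 1e-12:
                    continue
                a = (sw * shy - sh * sy) / denom
                b = (sy - a * sh) / sw
                err = (w * (y - a * h - b) ** 2).sum()
                if best is None or err < best[0]:
                    best = (err, d, delta, a, b)
        _, d, delta, a, b = best
        w = w * np.exp(-y * (a * (X[:, d] > delta) + b))  # reweight the samples
        w /= w.sum()
        stumps.append((d, delta, a, b))
    return stumps, w

def boost_score(stumps, X):
    return sum(a * (X[:, d] > delta) + b for d, delta, a, b in stumps)
# final prediction: np.sign(boost_score(stumps, X)); |F(x)| gives the credibility
```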

Feature Combination

Through the research and analysis of each stage of the cascaded deep model, we successfully constructed a multilevel deep learning model and learned multilevel features. The different layer features provide different information. The lowest-level feature is the closest to the real information of the data and contains the most data information; however, it contains less semantic information and is not sufficiently abstract. When a model has two or three layers, it abstracts the features at different levels, including certain semantic information; when the number of layers is higher, the model contains a higher level of abstract information. We believe that each layer of features is useful: the underlying features ensure that the data information distortion is not too great, and the high-level features can provide complementary structure information that the underlying features do not possess [10]. If we can effectively combine each layer of features, we will obtain a good machine learning model. Using the combination of the Gentle AdaBoost feature selection and classification functions, in this section, we propose an effective method of combining each layer of features and classifying with the combined features [11-14]. The combination of the two levels of features is shown in Figure 2.

The classifier we used in the cascaded deep model is the Gentle AdaBoost classifier with a feature selection function; therefore, we input the features of the different layers into the classifier algorithm to classify them and thereby combine the features of each layer. The Gentle AdaBoost classifier has the form F(x) = Σm fm(x). Assuming that the feature space of the first layer is X1 and the feature space of the second layer is X2, a strong classifier F1 is first learned from the features of the first layer. Because the classification algorithm yields the current weight w of each sample after each weak classifier, the sample weights for training F2 can be initialized, once F1 has been generated, with the weight distribution w of the samples at the end of training F1. Then, we learn the classifier F2 on the second-layer features, and the final classification result is sign(F1(x) + F2(x)). F1 uses only the first-layer features and F2 uses only the second-layer features; however, the sample weights w produced during the training of F1 carry over into the training of F2.
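A sketch of this combination using the gentle_adaboost routine above; X1 and X2 denote hypothetical first- and second-layer feature matrices for the same samples, and y the labels in {−1, +1}.

```python
import numpy as np

# F1 is trained on the first-layer features; its final sample weights
# initialize the training of F2 on the second-layer features.
stumps1, w = gentle_adaboost(X1, y, T=50)
stumps2, _ = gentle_adaboost(X2, y, T=50, w0=w)

# The combined classification result is sign(F1(x) + F2(x)).
prediction = np.sign(boost_score(stumps1, X1) + boost_score(stumps2, X2))
```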

Figure 2. The combination of two levels of features used in our model.

Results

Data Analysis

We used the Letter, Pendigit, and USPS data sets to conduct comparative experiments [15-18]. Specifically, the numbers of examples, features, and class labels are 20,000, 16, and 26 for the Letter data set; 10,992, 16, and 10 for the Pendigit data set; and 9298, 256, and 10 for the USPS data set, respectively. For the Letter data set, we selected the samples with the lowest classification accuracy; the corresponding serial numbers were (31, 41, 45, 125, 128, 157, 164, 173, 304). The candidate PCA retention values of the first layer were (4, 8, 10, 13, 16), those of the second layer were (3, 5, 8, 11, 13, 15, 20), and those of the third layer were (3, 5, 8, 11, 13, 15). The classification accuracy of the selected data under the first layer of PCA is shown in Table 1. As can be seen from the table, the optimal value was PCA1=13. With PCA1 fixed at 13, the classification accuracy of the second layer under different PCA2 values is shown in Table 2; the optimal value was PCA2=11. The third layer was tuned in the same way, and the corresponding optimal PCA3 was 13. In actual settings, several more groups of PCA retention values can be tested because the Letter data have a smaller sample size per class. Finally, each layer used the PCA retention dimension setting (13, 11, 13) to obtain the total classification accuracy of the Letter data at each layer. Tables 1 and 2 also show that the classification accuracy increases with the number of layers when the model classifies the Letter data. Figure 3 and Figure 4 show the classification accuracies of the Pendigit and USPS data sets at different layers, respectively.

Table 1. Classification accuracy of the first layer of Letter data with different PCA1a retention values (%).

Data serial number | PCA1=4 | PCA1=8 | PCA1=10 | PCA1=13 | PCA1=16
31 | 92.10 | 98.63 | 99.62 | 99.62 | 99.13
41 | 85.93 | 94.58 | 94.58 | 97.59 | 98.51
45 | 97.13 | 98.46 | 98.89 | 98.89 | 97.90
125 | 83.87 | 94.20 | 96.78 | 98.52 | 98.52
128 | 89.92 | 95.96 | 96.98 | 99.65 | 99.65
157 | 84.59 | 96.33 | 94.92 | 97.87 | 95.89
164 | 94.10 | 95.12 | 97.42 | 98.80 | 99.27
173 | 91.83 | 95.11 | 96.89 | 98.58 | 97.30
304 | 85.66 | 96.92 | 99.29 | 100.00 | 100.00
Average value | 89.60 | 96.18 | 97.28 | 98.89 | 98.72

aPCA1: principal component analysis 1.

Table 2. Classification accuracy of Letter data under different PCA2a retention values (%).

Data serial number | PCA2=3 | PCA2=5 | PCA2=8 | PCA2=11 | PCA2=13 | PCA2=15
31 | 99.62 | 99.62 | 99.62 | 99.13 | 99.62 | 99.62
41 | 97.59 | 97.99 | 97.59 | 98.59 | 98.59 | 97.89
45 | 99.78 | 99.78 | 99.23 | 99.78 | 99.23 | 99.78
125 | 99.39 | 99.39 | 99.39 | 99.39 | 99.82 | 99.25
128 | 99.65 | 100.00 | 99.65 | 99.65 | 99.65 | 99.65
157 | 98.94 | 98.94 | 98.94 | 98.94 | 98.49 | 98.94
164 | 99.19 | 99.19 | 99.73 | 98.73 | 99.19 | 99.19
173 | 98.91 | 98.91 | 98.91 | 98.91 | 98.91 | 98.58
304 | 100.00 | 100.00 | 100.00 | 100.00 | 100.00 | 100.00
Average value | 99.59 | 99.69 | 99.58 | 99.59 | 99.75 | 99.69

aPCA2: principal component analysis 2.

Figure 3. Classification accuracy of the Pendigit data set at different layers.

Figure 4. Classification accuracy of the USPS data set at different levels.

Prediction of Hospital Outpatient Volume Based on the RNN-RBM Network Model

Prediction Model Construction

Recurrent neural networks (RNNs) and restricted Boltzmann machine (RBM) networks have a strong ability to model time series. To exploit the advantages of these two network models for predicting hospital outpatient volumes, the two networks were combined to form a deep-structured RNN-RBM model. This deep neural network can describe the time dependence of high-dimensional sequences.

Among the various variants of the RBM, there is one called the conditional RBM. The conditional RBM differs from the standard RBM in that it adds two types of connections: one is an autoregressive connection, and the other is a hidden layer connection between the previous time step and the current time step. A conditional RBM can also be trained by the contrastive divergence (CD) algorithm. This structure can handle time series data effectively; therefore, it can be used to solve time series problems. The RNN-RBM model in this paper can also be viewed in terms of the conditional RBM. bv(t) and bh(t) represent the offset vectors of the visible layer and the hidden layer in the RBM model at time t, and they are updated by the hidden unit u(t–1) in the RNN model at time t–1. Weight matrices Wuv and Wuh are used to connect the RNN network model and the RBM network model. The bias vectors can be expressed as follows:

bv(t) = bv + Wuv·u(t–1)
bh(t) = bh + Wuh·u(t–1)

where bv and bh are the initial offset values of the visible and hidden layers in the RBM network model. The RNN network model unrolls gradually along the time steps and is used to generate the states of the hidden units in the RBM network model, based on the input layer v(t) and the hidden layer u(t–1) of the RNN network model. The hidden unit of the RNN is itself updated through the following activation function:

u(t) = sigmoid(bu + Wuu·u(t–1) + Wvu·v(t))

From this, we can see that the overall process of the algorithm is:

  1. The hidden units in the RNN model are activated.
  2. The hidden unit u(t–1) obtained in Step 1 is used to update the bias values in the RBM network model.
  3. The parameters of the RBM network model are updated (eg, by the CD algorithm).
  4. The RBM output is used as the input of the prediction layer, and the parameters of the prediction layer are initialized randomly.
  5. The backpropagation (BP) method is used to fine-tune the model from top to bottom. Error values are propagated back to the RBM and RNN network models, the weight matrices Wuv and Wuh are updated, and the RNN network model is trained to predict. A minimal sketch of this training loop follows the list.
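Below is a minimal numpy sketch of the training loop. The dimensions follow the simulation setup described later (15 departments at the visible layer, 20 RNN hidden units, 30 RBM hidden units); the CD-1 update for the conditioned RBM is shown at each time step, while the BP/BPTT fine-tuning of the RNN parameters (Step 5) is omitted for brevity. Visible data are assumed to be normalized to [0, 1].

```python
import numpy as np

rng = np.random.default_rng(0)
sigmoid = lambda x: 1.0 / (1.0 + np.exp(-x))

n_v, n_u, n_h = 15, 20, 30                 # visible, RNN hidden, RBM hidden units
W   = rng.normal(0, 0.01, (n_v, n_h))      # RBM visible-hidden weights
Wuv = rng.normal(0, 0.01, (n_u, n_v))      # RNN -> visible-bias connection
Wuh = rng.normal(0, 0.01, (n_u, n_h))      # RNN -> hidden-bias connection
Wuu = rng.normal(0, 0.01, (n_u, n_u))      # RNN recurrent weights
Wvu = rng.normal(0, 0.01, (n_v, n_u))      # input -> RNN weights
bv, bh, bu = np.zeros(n_v), np.zeros(n_h), np.zeros(n_u)

def train_sequence(V, lr=1e-3):
    """One pass over a sequence V of shape (T, n_v) of normalized volumes."""
    global W, bv, bh
    u = np.zeros(n_u)                      # RNN hidden state u(0)
    for v in V:
        bv_t = bv + Wuv.T @ u              # time-dependent visible bias
        bh_t = bh + Wuh.T @ u              # time-dependent hidden bias
        h0 = sigmoid(bh_t + W.T @ v)       # CD-1: positive phase
        v1 = sigmoid(bv_t + W @ (h0 > rng.random(n_h)))  # reconstruction
        h1 = sigmoid(bh_t + W.T @ v1)      # negative phase
        W  += lr * (np.outer(v, h0) - np.outer(v1, h1))
        bv += lr * (v - v1)
        bh += lr * (h0 - h1)
        u = sigmoid(bu + Wuu @ u + Wvu.T @ v)   # advance the RNN state (equation above)
```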

The experimental simulation data were obtained from real-world hospital outpatient volume data. In the data pretreatment stage, we extracted the outpatient volume information of each department of the hospital and applied certain statistical processing methods. In the actual data, outpatient volumes on hospital holidays were significantly reduced; to avoid the impact of these data, we deleted the corresponding records during preprocessing.

To better satisfy the prediction function, these data were counted according to the three time intervals of day, week, and month. To better conform to the prediction model based on the deep neural network proposed in this paper, these data were processed and expressed as a data matrix:

V = [vn(t)], n = 1, …, N; t = 1, …, T

in which vn(t) represents the outpatient volume data of department n in interval t.
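For illustration, the matrix can be assembled from raw visit records with pandas as follows; the file name and column names are hypothetical.

```python
import pandas as pd

records = pd.read_csv("outpatient_visits.csv", parse_dates=["date"])  # hypothetical file
# Rows: departments; columns: time intervals.
daily = records.pivot_table(index="department", columns="date",
                            values="visits", aggfunc="sum").fillna(0)
weekly  = daily.T.resample("W").sum().T    # aggregate the same matrix by week
monthly = daily.T.resample("M").sum().T    # and by month
```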

In this paper, RNN and RBM neural networks were combined to form a deep neural network model, and the model was used to predict the outpatient volume of the hospital. In the actual simulation, we selected the outpatient volume data of 15 important outpatient clinics as the input of the deep model; that is, the input layer of the model was set to 15. The number of hidden layer neurons in the RNN was set to 20, the number of hidden layer neurons in the RBM was set to 30, and the output of the prediction layer was 15.

Figure 5 and Figure 6 show the prediction results of the RNN-RBM model. In the simulation experiment, the outpatient volume data were trained at different intervals (daily, weekly, and monthly outpatient volume). From the simulation results, it can be seen that the forecast of the daily outpatient volume is closer to the real value than those of the weekly and monthly outpatient volumes; thus, the deep model created in this paper is more suitable for forecasting the daily outpatient volume. We believe that there are two reasons for this result. On the one hand, the amount of training data in the daily outpatient volume forecast model is larger; therefore, the training parameters of the model more closely fit the actual data relationship. On the other hand, the weekly and monthly outpatient volumes are accumulations of the daily outpatient volume; therefore, the errors caused by the prediction will gradually accumulate, and the greater the interval, the lower the prediction accuracy.

In practical applications, medical managers more often require short-term outpatient volume forecasts. Because forecasting outpatient volume over a shorter interval can provide support for medical management, this method still has certain advantages in practical applications.

We compared the outpatient volume forecasting method based on the RNN-RBM deep model with existing popular forecasting algorithms, and the results are shown in Table 3. Here, the prediction algorithm based on the autoregressive integrated moving average (ARIMA) model, the BP neural network prediction algorithm, the radial basis function (RBF) neural network prediction algorithm, and the SVM algorithm were selected for comparison.
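The error measures used in Table 3 are the mean absolute error (MAE) and the mean relative error (MRE), which can be computed as follows (a small illustrative helper, not code from the paper):

```python
import numpy as np

def mae(y_true, y_pred):
    return np.mean(np.abs(y_true - y_pred))

def mre(y_true, y_pred):
    # reported as a percentage in Table 3
    return 100 * np.mean(np.abs(y_true - y_pred) / np.abs(y_true))
```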

From this comparison, it can be readily concluded that the prediction algorithm based on the RNN-RBM deep model is superior to the other current prediction algorithms for the daily, weekly, and monthly outpatient volumes, and its prediction accuracy is relatively high.

Figure 5. Daily outpatient volume forecast results.

Figure 6. Weekly outpatient volume forecast results.
Table 3. Comparison of outpatient volume prediction methods.

Method | Error | Daily outpatient volume | Weekly outpatient volume | Monthly outpatient volume
RNN-RBMa | MAEb | 24.83 | 2395.11 | 1966.6
RNN-RBMa | MREc (%) | 3.9 | 4.6 | 5.9
ARIMAd | MAE | 37.9 | 3562.7 | 19525.5
ARIMAd | MRE (%) | 5.4 | 6.1 | 6.8
BPe | MAE | 53.48 | 4145.5 | 2549.5
BPe | MRE (%) | 7.3 | 8.9 | 9.6
SVMf | MAE | 44.3 | 3796.9 | 2647.8
SVMf | MRE (%) | 6.4 | 7.9 | 9.4
RBFg | MAE | 38.9 | 3586.3 | 22978
RBFg | MRE (%) | 5.7 | 7.1 | 9

aRNN-RBM: recurrent neural network-restricted Boltzmann machine.

bMAE: mean absolute error.

cMRE: mean relative error.

dARIMA: autoregressive integrated moving average.

eBP: backpropagation.

fSVM: support vector machine.

gRBF: radial basis function.


Discussion

Big data in the field of health care is an integral part of the strategic layout of national big data, and the analysis and mining of valuable information is also related to the development of national health care. At present, the problems that must be solved in the analysis and application of health care data include timely and accurate collection and acquisition of health care data as well as efficient use of high-speed networks for reliable transmission of health care–related digital, image, voice, and other information. Machine learning and deep learning technology related to artificial intelligence can be used to mine useful information from health care–related big data and develop intelligent applications for medical staff and ordinary people. In this paper, we studied a feature learning model of medical health data based on probabilistic and deep learning mining, mined information from medical big data, and addressed intelligent application–related problems; we also studied the differences between medical risk assessment–related data and general big data as well as multimodal data feature representation and learning. Several data feature learning models were proposed to extract the relationships among the outpatient volume data and to obtain precise predictive values of the outpatient volume, which is very helpful for the rational allocation of medical resources and the promotion of intelligent medical treatment.

Acknowledgments

The authors acknowledge the Soft Science Project of the Sichuan Science and Technology Department (Grant: 2017ZR0169).

Conflicts of Interest

None declared.

  1. Nie L, Zhao Y, Akbari M, Shen J, Chua T. Bridging the Vocabulary Gap between Health Seekers and Healthcare Knowledge. IEEE Trans Knowl Data Eng 2015 Feb 1;27(2):396-409. [CrossRef]
  2. Lewis D, Pluye P, Rodriguez C, Grad R. Mining reflective continuing medical education data for family physician learning needs. J Innov Health Inform 2016 Apr 06;23(1):834-440 [FREE Full text] [CrossRef] [Medline]
  3. Shen D, Zhang D, Young A, Parvin B. Machine Learning and Data Mining in Medical Imaging. IEEE J Biomed Health Inform 2015 Sep;19(5):1587-1588. [CrossRef] [Medline]
  4. Wong KKL, Wang D, Fong S, Ng EYK. A Special Section on Advanced Computing Techniques for Machine Learning and Data Mining in Medical Informatics. J Med Imaging Hlth Inform 2016 Aug 01;6(4):1052-1055. [CrossRef]
  5. Van Eaton EG, Devlin AB, Devine EB, Flum DR, Tarczy-Hornoch P. Achieving and sustaining automated health data linkages for learning systems: barriers and solutions. EGEMS (Wash DC) 2014;2(2):1069 [FREE Full text] [CrossRef] [Medline]
  6. Sun W, Cai Z, Li Y, Liu F, Fang S, Wang G. Data Processing and Text Mining Technologies on Electronic Medical Records: A Review. JHE 2018 Apr 08;2018:1-9. [CrossRef]
  7. Kusy M, Zajdel R. Probabilistic neural network training procedure based on Q(0)-learning algorithm in medical data classification. Appl Intell 2014 Aug 6;41(3):837-854. [CrossRef]
  8. Bashir MEA, Shon HS, Lee DG, Kim H, Ryu KH. Real-Time Automated Cardiac Health Monitoring by Combination of Active Learning and Adaptive Feature Selection. KSII TIIS 2013 Jan 29;7(1):99-118. [CrossRef]
  9. Henriques R, Antunes C, Madeira SC. Generative modeling of repositories of health records for predictive tasks. Data Min Knowl Disc 2014 Nov 12;29(4):999-1032. [CrossRef]
  10. Bos JW, Lauter K, Naehrig M. Private predictive analysis on encrypted medical data. J Biomed Inform 2014 Aug 14;50:234-243 [FREE Full text] [CrossRef] [Medline]
  11. Wu S. A Traffic Motion Object Extraction Algorithm. Int. J. Bifurcation Chaos 2016 Jan 14;25(14):1540039. [CrossRef]
  12. Wu S, Wang M, Zou Y. Research on internet information mining based on agent algorithm. Future Generation Computer Systems 2018 Sep;86:598-602. [CrossRef]
  13. Ke Q, Wu S, Wang M, Zou Y. Evaluation of Developer Efficiency Based on Improved DEA Model. Wireless Pers Commun 2018 Feb 7;102(4):3843-3849. [CrossRef]
  14. Wu S, Wang M, Zou Y. Sewage information monitoring system based on wireless sensor. Desalination Water Treat 2018 Jul;121:73-83. [CrossRef]
  15. Cour T, Sapp B, Taskar B. Learning from Partial Labels. The Journal of Machine Learning Research 2011 Jul;12:1501-1536 [FREE Full text] [CrossRef]
  16. Chen Y, Patel V, Chellappa R, Phillips P. Ambiguously Labeled Learning Using Dictionaries. IEEE Trans Inform Forensic Secur 2014 Dec;9(12):2076-2088. [CrossRef]
  17. Liu L, Dietterich T. A conditional multinomial mixture model for superset label learning. In: Bartlett P, Pereira FCN, Burges CJC, Bottou L, Weinberger KQ, editors. Advances in Neural Information Processing Systems 25. Cambridge, MA: MIT Press; 2012:557-565.
  18. Zhang ML, Zhou BB, Liu XY. Partial Label Learning via Feature-Aware Disambiguation. In: KDD '16: Proceedings of the 22nd ACM SIGKDD International Conference on Knowledge Discovery and Data Mining. 2016 Aug Presented at: 22nd ACM SIGKDD International Conference on Knowledge Discovery and Data Mining; August, 2016; San Francisco, CA p. 1335-1344.


AdaBoost: adaptive boosting
ARIMA: autoregressive integrated moving average
BP: backpropagation
CD: contrastive divergence
PCA: principal component analysis
RBF: radial basis function
RBM: restricted Boltzmann machine
RNN: recurrent neural network
RSR: regularized self-representation
SVM: support vector machine
VCA: vanishing component analysis


Edited by Z Du, K Kalemaki, H Li; submitted 02.04.20; peer-reviewed by E Marcs, H Wang, J Fu; comments to author 14.04.20; revised version received 08.05.20; accepted 08.05.20; published 08.04.21

Copyright

©Yuanlin Yang, Dehua Li. Originally published in JMIR Medical Informatics (http://medinform.jmir.org), 08.04.2021.

This is an open-access article distributed under the terms of the Creative Commons Attribution License (https://creativecommons.org/licenses/by/4.0/), which permits unrestricted use, distribution, and reproduction in any medium, provided the original work, first published in JMIR Medical Informatics, is properly cited. The complete bibliographic information, a link to the original publication on http://medinform.jmir.org/, as well as this copyright and license information must be included.