Original Paper
Abstract
Background: Surveillance of ectopic pregnancy (EP) using electronic databases is important. To our knowledge, no published study has assessed the validity of EP case ascertainment using electronic health records.
Objective: We aimed to assess the validity of an enhanced version of a previously validated algorithm, which used a combination of encounters with EP-related diagnostic/procedure codes and methotrexate injections.
Methods: Medical records of 500 women aged 15-44 years with membership at Kaiser Permanente Southern and Northern California between 2009 and 2018 and a potential EP were randomly selected for chart review, and true cases were identified. The enhanced algorithm included diagnostic/procedure codes from the International Classification of Diseases, Tenth Revision, used telephone appointment visits, and excluded cases with only abdominal EP diagnosis codes. The sensitivity, specificity, positive predictive value (PPV), negative predictive value (NPV), and overall performance (Youden index and F-score) of the algorithm were evaluated and compared to the validated algorithm.
Results: Of the potential EP cases with available records, 334 were confirmed EP cases and 166 were not. Confirmed and nonconfirmed cases did not differ significantly by maternal age, race/ethnicity, or smoking status. EP cases with only one encounter and non-tubal EPs were more likely to be misclassified. The sensitivity, specificity, PPV, and NPV of the enhanced algorithm for EP were 97.6%, 84.9%, 92.9%, and 94.6%, respectively. The Youden index and F-score were 82.5% and 95.2%, respectively. The sensitivity and NPV were lower for the previously published algorithm, at 94.3% and 88.1%, respectively. The sensitivity of surgical procedure codes from electronic chart abstraction to correctly identify surgical management was 91.9%. The overall accuracy, defined as the percentage of EP cases with correct management (surgical, medical, and unclassified) identified by electronic chart abstraction, was 92.3%.
Conclusions: The performance of the enhanced algorithm for EP case ascertainment in integrated health care databases is adequate to allow for use in future epidemiological studies. Use of this algorithm will likely result in better capture of true EP cases than the previously validated algorithm.
doi:10.2196/18559
Keywords
Introduction
Use of claims, administrative databases, and electronic health records (EHRs) allows for efficient identification of individuals with medical conditions [ ]. National hospital databases and discharge diagnoses have been used extensively to monitor serious medical conditions leading to significant morbidity, such as acute myocardial infarction; however, hospital databases do not adequately capture serious conditions that do not necessarily require hospitalization. Ectopic pregnancy (EP), the implantation of a fertilized ovum outside of the endometrial cavity, is a serious condition that can be life threatening; however, a significant proportion of patients can be managed in the outpatient setting. Trends in EP are difficult to examine because women with EPs are increasingly managed in the outpatient setting, either medically with methotrexate injection(s) or surgically with laparoscopy [ , ]. Furthermore, women with potential EPs may be evaluated over the course of several days and medical encounters before a definitive diagnosis of EP or of a viable or nonviable intrauterine pregnancy is established, making identification of true cases difficult.

Researchers have typically relied on clinical diagnosis and procedure codes extracted from outpatient care and hospital discharge databases to describe trends in EP. However, the accuracy of EP case ascertainment and the validity of study findings depend on the types of data sources and the completeness of EP case ascertainment approaches. One methodology for EP case ascertainment was validated in a study by Scholes et al in 2011 [ ], using claims and administrative data extracted from a large health care maintenance organization database before the use of EHRs and of codes from the International Classification of Diseases, Tenth Revision (ICD-10). Although the sensitivity of that algorithm for capturing EP cases was higher than that of standard codes alone, the algorithm is inherently limited by the time frame of the study, the completeness of the data, and the ability to review patients' medical information in an electronic database for true case ascertainment [ - ].

The widespread adoption of EHRs in the United States presents an opportunity to improve patient care [ , ] and provides researchers unparalleled opportunities to conduct high-quality clinical and pharmacoepidemiologic research [ , ]. EHRs provide access to more reliable and comprehensive patient health information. They are also easily transferable to other EHR systems and more cost-efficient than paper-based data sources [ - ]. Over the last decade, a number of studies have evaluated the accuracy of health data (hospital discharge data, outpatient encounter data, and claims data) extracted from the EHRs of various regions of the Kaiser Permanente health care system [ - ] and other health care systems [ , ]. Published validation studies have investigated demographic characteristics [ ], body weight and height data [ ], perinatal outcomes [ , ], phenotypes for genomic studies [ ], and the phenotype of HIV infection [ ]. However, to our knowledge, no study has assessed the validity of EP case ascertainment using EHRs or the potential impact of changes in the data over time (pre-EHR vs EHR era). There is substantial variation in practice patterns over time and across institutions and health care providers. The Scholes et al algorithm was developed 10 years ago at two institutions with potentially different practice environments from the setting of this study. Furthermore, the data for the Scholes et al algorithm came largely from contracting hospitals for inpatient care, which may have disparate practice and coding patterns. Therefore, validating the algorithm in a different time frame and setting is necessary before conducting future studies describing the temporal trends of EP incidence and treatment modalities. This study aimed to develop an enhanced algorithm that builds on the previously validated algorithm [ ].

Methods
Kaiser Permanente Northern California (KPNC) and Southern California (KPSC) are the two largest of the nine regional Kaiser Permanente entities in the United States. These integrated health care systems provide health care services to over 9 million racially and ethnically diverse members, who receive their care mainly from Kaiser Permanente physicians and allied staff in 36 hospitals and over 427 medical centers throughout California. Both KPSC and KPNC access the Virtual Data Warehouse, which was created to facilitate multi-site research projects. Kaiser Permanente health care staff in both outpatient and inpatient clinical settings use an EHR based on an Epic platform that is accessible to multiple health care providers at the same time and in multiple locations. KPSC and KPNC fully implemented the EHR system for both outpatient care encounters and inpatient services in 2008 and 2009, respectively. It is a highly integrated health information and care management system designed to enhance the quality of patient care. Data are collected in real time in patient-centered records that give clinicians and researchers faster, more efficient, and more secure access to comprehensive patient information than pre-EHR era paper records.
We developed an enhanced algorithm to identify EPs in the two health care systems through several iterative steps. First, we incorporated corresponding ICD-10 diagnostic and procedure codes that were not in use when the Scholes et al algorithm was developed in 2011. We then reviewed the charts of an initial random sample of 100 cases (50 KPNC and 50 KPSC) that had at least one EP diagnostic or procedure code but were not classified as EP by the Scholes et al algorithm to understand the reasons for misclassification. This information was used to modify the Scholes et al algorithm to improve the accuracy of case ascertainment. In addition to the inclusion of ICD-10 diagnostic/procedure codes, the major changes made to the previously validated algorithm as a result of our initial chart review were the addition of a new source of information (telephone appointment visits [TAVs]), the exclusion of cases with only abdominal EP diagnosis codes, a new criterion accepting a single encounter with both an EP diagnostic and a procedure code as a case, refinement of the methotrexate medication codes considered valid, and expansion of the allowable days from the assigned EP diagnosis date to administration of methotrexate.
The final enhanced algorithm ( ) required any of the following: (1) at least 2 encounters, including at least 1 in-person visit, with an EP code other than abdominal EP (abdominal codes O00.00 and O00.01); (2) at least 2 TAVs with an EP code and evidence of methotrexate use; (3) at least 1 outpatient or inpatient visit or outside claims visit with any of the specific ICD, Ninth Revision (ICD-9), or ICD-10 diagnostic codes 633.10, 633.11, O00.10, and O00.11; (4) any single encounter (outpatient or inpatient visit, outside claims visit, or TAV) with a nonspecific EP code plus evidence of methotrexate use; or (5) a single non-TAV encounter with both an EP diagnosis code and a procedure code on the same encounter.

The EP diagnosis date was defined as the date of the first encounter with an EP code. Multiple encounters with EP codes occurring within a 180-day period from the first encounter with an EP code were considered part of the same pregnancy episode. Methotrexate use was defined as a medication code found within 30 days prior to and 180 days after the first EP diagnosis date. The criteria for methotrexate administration were relaxed to 30 days prior to the first diagnosis, in contrast to the 7 days allowed in the Scholes et al algorithm, to minimize misclassification of treatment status due to inaccurate assignment of EP diagnosis dates. In randomly selected chart abstractions, we also found that methotrexate medication codes had various administration subcodes that corresponded with true use of methotrexate; hence, we specified the valid medication administration subcodes in the algorithm.
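To make these decision rules concrete, the following Python sketch applies the five criteria above to the encounter-level data for a single pregnancy episode. It is an illustration only, written under assumed field names and with truncated code lists (the complete ICD-9, ICD-10, and CPT-4 code sets are in the multimedia appendix); it is not the production program used at the study sites.

```python
from datetime import date, timedelta
from typing import List

# Illustrative code sets only; the full lists appear in the study's multimedia appendix.
SPECIFIC_EP_DX = {"633.10", "633.11", "O00.10", "O00.11"}  # specific (tubal) EP diagnosis codes
ABDOMINAL_EP_DX = {"O00.00", "O00.01"}                     # abdominal EP codes

class Encounter:
    def __init__(self, enc_date: date, enc_type: str,
                 dx_codes: List[str], has_ep_procedure: bool = False):
        self.date = enc_date
        self.type = enc_type                 # "inpatient", "outpatient", "claim", or "TAV"
        self.dx_codes = set(dx_codes)        # EP-related diagnosis codes on this encounter
        self.has_ep_procedure = has_ep_procedure

def is_ep_case(encounters: List[Encounter], methotrexate_dates: List[date]) -> bool:
    """Apply the five criteria of the enhanced algorithm to one pregnancy episode.

    `encounters` holds all EP-coded encounters within 180 days of the first
    EP-coded encounter (the pregnancy episode); `methotrexate_dates` holds dates
    of methotrexate administration carrying a valid administration subcode.
    """
    if not encounters:
        return False
    index_date = min(e.date for e in encounters)  # EP diagnosis date
    # Methotrexate counts if administered 30 days before to 180 days after the index date.
    mtx = any(index_date - timedelta(days=30) <= d <= index_date + timedelta(days=180)
              for d in methotrexate_dates)

    non_abdominal = [e for e in encounters if e.dx_codes - ABDOMINAL_EP_DX]
    non_tav = [e for e in encounters if e.type != "TAV"]
    tavs = [e for e in encounters if e.type == "TAV"]

    # (1) >= 2 encounters with a non-abdominal EP code, at least 1 of them in person
    if len(non_abdominal) >= 2 and any(e.type != "TAV" for e in non_abdominal):
        return True
    # (2) >= 2 TAVs with an EP code plus evidence of methotrexate use
    if len(tavs) >= 2 and mtx:
        return True
    # (3) >= 1 outpatient/inpatient/outside-claim visit with a specific EP diagnosis code
    if any(e.dx_codes & SPECIFIC_EP_DX for e in non_tav):
        return True
    # (4) any single encounter with a (nonspecific) EP code plus evidence of methotrexate use
    if mtx:
        return True
    # (5) a single non-TAV encounter carrying both an EP diagnosis and an EP procedure code
    if any(e.has_ep_procedure for e in non_tav):
        return True
    return False
```

Under these rules, for example, a single emergency department visit coded O00.11 would be captured by criterion 3, whereas a single TAV with only a nonspecific EP code and no methotrexate would not be counted as a case.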
To assess the validity of the previously validated algorithm by Scholes et al and the newly developed enhanced version of the algorithm against the gold-standard “true case” as determined by chart review, a random sample of 600 patients (300 at each site) with a potential EP was selected. A potential case was defined as any case with at least 1 ICD-9, ICD-10, or Current Procedural Terminology code for EP ( ). This approach was chosen because, in our setting, as in most health care settings that rely on insurance reimbursement, it is unlikely for an EP case to lack documentation with either a diagnosis or a procedure code. Therefore, we assumed that cases that did not meet the initial inclusion criteria would be very unlikely to be true EP cases. By limiting the sample to cases meeting these inclusion criteria, we increased the number of true cases with little risk of missing cases. Further inclusion criteria (age 15 to 44 years between January 1, 2009, and December 31, 2018, and enrollment in the health plan for at least 1 month during the study period) were then applied to the 600 randomly selected cases. Cases that did not meet these requirements were excluded, leaving 255 cases at KPSC and 276 at KPNC. We randomly selected 250 cases from each site for chart review for this validation study ( ).

Using a standardized abstraction form, trained abstractors performed chart reviews to identify true EP cases. Cases in which EP status was unclear were identified and adjudicated by a clinician. In preliminary data pulls, we found that 10.5% (1568/14,907) of EP cases identified using the Scholes et al algorithm could not be clearly classified as either medically or surgically managed. Therefore, information on treatment modality (surgical vs medical) was collected to assess the level of agreement. EP cases were classified as surgically managed if the patient had undergone any EP removal surgery within 30 days of the first encounter with an EP code, regardless of whether the patient received methotrexate. Remaining EP cases were classified as medically treated if the patient received methotrexate for an EP. Cases for which the type of treatment could not be determined were considered unclassified.
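The treatment assignment hierarchy described above (surgical first, then medical, otherwise unclassified) can likewise be written as a short ordered rule. The following is a minimal sketch with assumed inputs (dates of EP-removal surgery and of methotrexate administration), not the abstraction program used in the study.

```python
from datetime import date
from typing import List

def classify_management(first_ep_code_date: date,
                        ep_surgery_dates: List[date],
                        methotrexate_dates: List[date]) -> str:
    """Assign EP management type for comparison against chart review."""
    # Surgical: any EP removal surgery within 30 days of the first EP-coded encounter,
    # regardless of whether methotrexate was also given.
    if any(0 <= (d - first_ep_code_date).days <= 30 for d in ep_surgery_dates):
        return "surgical"
    # Medical: among the remaining cases, methotrexate given for the EP.
    if methotrexate_dates:
        return "medical"
    # Otherwise the treatment type cannot be determined from the electronic data.
    return "unclassified"

# Example: surgery 5 days after the first EP code -> surgical, even if methotrexate was given.
print(classify_management(date(2016, 3, 1), [date(2016, 3, 6)], [date(2016, 3, 2)]))
```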
The test performance of both algorithms was calculated on the 500 potential EP cases: sensitivity (percentage of chart review–confirmed cases that were correctly classified as EP by the algorithm), specificity (percentage of cases determined not to be EP by chart review that were correctly classified by the algorithm), positive predictive value (PPV; percentage of cases classified as EP by the algorithm that were confirmed by chart review), and negative predictive value (NPV; percentage of cases classified as not EP by the algorithm that were determined not to be EP by chart review). Furthermore, the overall test performance of a dichotomous diagnostic test was assessed using the Youden J statistic [ ] (Youden index=sensitivity+specificity–1), and the weighted harmonic mean of the test's precision and recall was assessed by computing the F-score (2×[PPV×sensitivity]/[PPV+sensitivity]). Agreement in case identification between the Scholes et al and the enhanced algorithms was assessed using the kappa (κ) statistic. In addition, we evaluated the performance of electronic abstraction in correctly identifying EP management type (medical or surgical) among confirmed EP cases compared to that of chart review, using the same performance measures. Lastly, we conducted a sensitivity analysis calculating the same performance measures with the Scholes et al algorithm and the enhanced algorithm for the subset of cases from 2009 to the end of 2014 (ICD-9–only cases).

Results
The table below shows the distribution of maternal characteristics in the study sample and in the two study sites (KPSC and KPNC) from which the sample for this validation study was drawn. Only a small proportion of the women in the sample population were teenagers, and over a third were Hispanic. There was a higher proportion of Hispanic members at KPSC than at KPNC and a higher proportion of non-Hispanic White and Asian/Pacific Islander members at KPNC than at KPSC. Only a small proportion of women in the sampled cohort lived in neighborhoods with a median annual household income below US $30,000. Although the distribution of maternal characteristics was largely comparable between the sampled population and the overall cohort, women in the sampled population were slightly more likely to be non-Hispanic Black and slightly less likely to be non-Hispanic White.
Characteristics | Sample, chart reviewed, n (%) (n=500) | Overall, n (%) (N=19,615) | KPSCa, n (%) (n=9823) | KPNCb, n (%) (n=9792)
Maternal age (years) | |||||
<20 | 22 (4.4) | 668 (3.4) | 353 (3.6) | 315 (3.2) | |
20-29 | 169 (33.8) | 7036 (35.9) | 3643 (37.1) | 3393 (34.7) | |
30-34 | 157 (31.4) | 6073 (31.0) | 2970 (30.2) | 3103 (31.7) | |
≥35 | 152 (30.4) | 5838 (29.8) | 2857 (29.1) | 2981 (30.4) | |
Race/ethnicity | |||||
Non-Hispanic White | 124 (24.8) | 5458 (27.8) | 2257 (23.0) | 3201 (32.7) | |
Non-Hispanic Black | 82 (16.4) | 2579 (13.1) | 1298 (13.2) | 1281 (13.1) | |
Hispanic | 199 (39.8) | 7668 (39.1) | 4960 (50.5) | 2708 (27.7) | |
Asian/Pacific Islander | 83 (16.6) | 3261 (16.6) | 1069 (10.9) | 2192 (22.4) | |
Other | 4 (0.8) | 349 (1.8) | 144 (1.5) | 205 (2.1) | |
Unknown | 8 (1.6) | 300 (1.5) | 95 (1.0) | 205 (2.1) | |
Smoking statusc | |||||
No | 461 (92.2) | 17,947 (91.5) | 8929 (90.9) | 9018 (92.1) | |
Yes | 39 (7.8) | 1668 (8.5) | 894 (9.1) | 774 (7.9) | |
Parity | |||||
Nullipara | 146 (29.2) | 5690 (29.0) | 2671 (27.2) | 3019 (30.8) | |
Multipara | 259 (51.8) | 10,444 (53.2) | 5214 (53.1) | 5230 (53.4) | |
Missing/unavailable | 95 (19.0) | 3481 (17.7) | 1938 (19.7) | 1543 (15.8) | |
Family household incomed (US $) | |||||
<$30,000 | 31 (6.2) | 1092 (5.6) | 584 (5.9) | 508 (5.2) | |
$30,000-$49,999 | 117 (23.4) | 4863 (24.8) | 2806 (28.6) | 2057 (21.0) | |
$50,000-$69,999 | 147 (29.4) | 5474 (27.9) | 2913 (29.7) | 2561 (26.2) | |
$70,000-$89,999 | 104 (20.8) | 4131 (21.1) | 1969 (20.0) | 2162 (22.1) | |
≥$90,000 | 101 (20.2) | 4033 (20.6) | 1535 (15.6) | 2498 (25.5) |
aKPSC: Kaiser Permanente Southern California.
bKPNC: Kaiser Permanente Northern California.
cSmoking status documented within the year prior to the index date.
dMedian family household income based on census tract of residence.
Chart review demonstrated that 334 (66.8%) of the 500 cases were true ectopic pregnancies. The sensitivity, specificity, PPV, and NPV of the Scholes et al algorithm and the enhanced algorithm for identifying EPs are presented in the table below. The sensitivity, specificity, NPV, and PPV for the Scholes et al algorithm were lower, at 94.3% (315/334), 84.3% (140/166), 88.1% (140/159), and 92.4% (315/341), respectively, than those for the enhanced algorithm, at 97.6% (326/334), 84.9% (141/166), 94.6% (141/149), and 92.9% (326/351), respectively. Furthermore, the overall performance (Youden index and F-score) of the enhanced algorithm was higher than that of the Scholes et al algorithm, at 82.5 and 95.2 versus 78.6 and 93.3, respectively.

Characteristic | Scholes et al algorithm: Yes | No | Total | Enhanced algorithm: Yes | No | Total
Classification by chart review, n | ||||||||||||
Yes | 315 | 19 | 334 | 326 | 8 | 334 | ||||||
No | 26 | 140 | 166 | 25 | 141 | 166 | ||||||
Total | 341 | 159 | 500 | 351 | 149 | 500 | ||||||
Test characteristics | ||||||||||||
Sensitivity, % (n/N) | N/Aa | N/A | 94.3 (315/334) | N/A | N/A | 97.6 (326/334) | ||||||
Specificity, % (n/N) | N/A | N/A | 84.3 (140/166) | N/A | N/A | 84.9 (141/166) | ||||||
Negative predictive value, % (n/N) | N/A | N/A | 88.1 (140/159) | N/A | N/A | 94.6 (141/149) | ||||||
Positive predictive value, % (n/N) | N/A | N/A | 92.4 (315/341) | N/A | N/A | 92.9 (326/351) | ||||||
Youden index | N/A | N/A | 78.6 | N/A | N/A | 82.5 | ||||||
F-score | N/A | N/A | 93.3 | N/A | N/A | 95.2 |
aN/A: not applicable.
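For concreteness, the test characteristics in the table above follow directly from the 2×2 counts using the formulas given in the Methods. The short sketch below reproduces the enhanced-algorithm column from its cell counts (true positives 326, false positives 25, false negatives 8, true negatives 141); it is a worked illustration, not analysis code from the study.

```python
def test_characteristics(tp: int, fp: int, fn: int, tn: int) -> dict:
    """Standard 2x2 diagnostic-test measures, reported as percentages."""
    sens = tp / (tp + fn)                    # sensitivity (recall)
    spec = tn / (tn + fp)                    # specificity
    ppv = tp / (tp + fp)                     # positive predictive value (precision)
    npv = tn / (tn + fn)                     # negative predictive value
    youden = sens + spec - 1                 # Youden J statistic
    f_score = 2 * ppv * sens / (ppv + sens)  # harmonic mean of PPV and sensitivity
    return {name: round(value * 100, 1)
            for name, value in [("sensitivity", sens), ("specificity", spec),
                                ("PPV", ppv), ("NPV", npv),
                                ("Youden index", youden), ("F-score", f_score)]}

# Enhanced algorithm versus chart review (table above): TP=326, FP=25, FN=8, TN=141
print(test_characteristics(tp=326, fp=25, fn=8, tn=141))
# {'sensitivity': 97.6, 'specificity': 84.9, 'PPV': 92.9, 'NPV': 94.6,
#  'Youden index': 82.5, 'F-score': 95.2}
```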
We evaluated the performance of electronic abstraction in correctly identifying EP management type in the 326 EP cases identified by both chart review and the enhanced algorithm. Chart review revealed that 197 (60.4%) were managed surgically, 126 (38.7%) were managed medically, and 3 (0.9%) could not be classified. Electronic abstraction assigned 186 (57.1%) EP cases as managed surgically and 124 (38.0%) as managed medically, and 16 (4.9%) could not be classified. The performance of electronic chart abstraction in assigning EP management compared to that of chart review is provided in the table below. The sensitivity of surgical procedure codes from electronic chart abstraction to correctly identify surgical management was 91.9% (181/197). The overall accuracy, defined as the percentage of EP cases with correct management (surgical, medical, and unclassified) identified by electronic chart abstraction, was 92.3% (301/326). An excellent level of agreement in EP case identification (κ=0.93, 95% CI 0.89-0.96) was observed between the Scholes et al algorithm and the enhanced algorithm.

Characteristic | Classification by electronic abstractiona: Surgical | Medical | Unclassified | Total
Classification by chart review, n | |||||
Surgical | 181 | 5 | 11 | 197 | |
Medical | 5 | 118 | 3 | 126 | |
Unclassified | 0 | 1 | 2 | 3 | |
Total | 186 | 124 | 16 | 326a | |
Test characteristics | |||||
Sensitivity, % (n/N) | 91.9 (181/197) | N/Ab | N/A | N/A | |
Specificity, % (n/N) | 96.1 (124/129) | N/A | N/A | N/A | |
Negative predictive value, % (n/N) | 88.6 (124/140) | N/A | N/A | N/A | |
Positive predictive value, % (n/N) | 97.3 (181/186) | N/A | N/A | N/A | |
Youden index | 88 | N/A | N/A | N/A | |
F-score | 94.5 | N/A | N/A | N/A | |
Overall accuracyc, % (n/N) | N/A | N/A | N/A | 92.3 (301/326) |
aIncludes cases confirmed as ectopic pregnancy by chart review and the enhanced algorithm.
bN/A: not applicable.
cThe percentage of ectopic pregnancy cases with correct management (surgical, medical, and unclassified) identified by electronic chart abstraction.
Sensitivity analysis limiting the data to the subset of cases (n=307) from 2009 to 2014 with ICD-9–only codes revealed that the sensitivity and NPV for the Scholes et al subset analysis, at 94.5% (206/218) and 85.9% (73/85), respectively ( ), were similar to the 94.3% (315/334) and 88.1% (140/159), respectively, for the Scholes et al full data set ( ). The performance of the enhanced algorithm in the subset analysis (sensitivity of 97.2%, 212/218; NPV of 92.4%, 73/79) was also similar to its performance for the full data set (sensitivity of 97.6%, 326/334; NPV of 94.6%, 141/149).

Characteristic | Scholes et al algorithm: Yes | No | Total | Enhanced algorithm: Yes | No | Total
Classification by chart review, n | ||||||||
Yes | 206 | 12 | 218 | 212 | 6 | 218 | ||
No | 16 | 73 | 89 | 16 | 73 | 89 | ||
Total | 222 | 85 | 307 | 228 | 79 | 307 | ||
Test characteristics | ||||||||
Sensitivity, % (n/N) | N/Aa | N/A | 94.5 (206/218) | N/A | N/A | 97.2 (212/218) | ||
Specificity, % (n/N) | N/A | N/A | 82.0 (73/89) | N/A | N/A | 82.0 (73/89) | ||
Negative predictive value, % (n/N) | N/A | N/A | 85.9 (73/85) | N/A | N/A | 92.4 (73/79) | ||
Positive predictive value, % (n/N) | N/A | N/A | 92.8 (206/222) | N/A | N/A | 93.0 (212/228) | ||
Youden index | N/A | N/A | 76.5 | N/A | N/A | 79.3 | ||
F-score | N/A | N/A | 93.6 | N/A | N/A | 95.1 |
aN/A: not applicable.
Discussion
In this validation study of EP, we found that our enhanced version of an algorithm previously validated by Scholes et al [ ] in 2011 for identification of EP had a slightly higher sensitivity of 97.6% and negative predictive value of 94.6% than the original algorithm. The overall test performance, as estimated by the Youden index and F-score, was also higher for the enhanced algorithm. However, we found similar specificities and PPVs for the enhanced and Scholes et al algorithms. Furthermore, when the test performance was limited to the period when ICD-9 was used to code and classify medical conditions (2009-2014), the enhanced algorithm still yielded a higher sensitivity, NPV, and overall test performance in EP case identification, suggesting that the differences are due to improvements in clinical information collection and retrieval rather than to ICD code changes (from ICD-9 to ICD-10).

The quality of data extracted from outpatient encounters and hospital discharge records has been well studied. The accuracy of data abstraction varies by health care system, coding and clinical practice, and design of EHR query modules, among other factors [ ]. For example, in a fee-for-service setting, in-person visits may be the primary mode of care; in a capitated care model, however, telephone encounters, which are not billable but allow providers to speak directly with patients who may be at home or another convenient location, may be used more frequently. These appointments usually last about 20 minutes and do not require a copay. Although TAVs are an efficient option that helps patients avoid unnecessary in-person visits, the usefulness and quality of data extracted from them have not been well studied. We evaluated the performance of our enhanced algorithm after including TAVs in the algorithm and found that accuracy improved when TAV EP codes were used in combination with EP codes from in-person encounters ( ).

Scholes et al developed the original algorithm using a classification and regression tree (CART) [ ]. The CART model is a nonparametric classification technique for building decision trees whose results are presented in a useful and easy-to-interpret “tree” format; however, it does not generate the prediction probabilities needed to assess calibration, so model discriminatory accuracy is typically assessed instead (a brief illustrative sketch of such a model follows below). We made minor modifications to the algorithm to incorporate equivalent ICD-10 diagnostic and procedure codes and to take into account other coding differences unique to current EHRs (ie, new medication codes) and clinical practice (ie, increasing use of TAVs). Therefore, our enhanced algorithm is updated to a more current health care setting, has a high PPV for case identification, and will support contemporary observational studies with validated accuracy. Because our enhanced algorithm had a higher PPV than the Scholes et al algorithm and the agreement between the two algorithms was high, we did not perform a new CART analysis.

Accurate case identification using the enhanced algorithm is feasible and increasingly useful for public health disease surveillance and epidemiological studies. Furthermore, early identification of high-risk women may provide better opportunities for early detection of EP in affected women.
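For readers unfamiliar with CART, the sketch below shows, with simulated data and scikit-learn, how a classification tree can be fit to encounter-level features and rendered as human-readable splitting rules. The features, labels, and library choice are illustrative assumptions; this is not the software, data, or feature set used by Scholes et al.

```python
# Illustrative CART example on simulated encounter-level features (not the original analysis).
import numpy as np
from sklearn.tree import DecisionTreeClassifier, export_text

rng = np.random.default_rng(0)
n = 1000
X = np.column_stack([
    rng.integers(1, 5, n),   # n_ep_encounters: number of encounters with an EP code
    rng.integers(0, 2, n),   # methotrexate: any qualifying methotrexate administration (0/1)
    rng.integers(0, 2, n),   # specific_ep_code: any specific EP diagnosis code (0/1)
])
# Simulated "chart-review" labels loosely resembling the case-finding logic
y = ((X[:, 0] >= 2) | ((X[:, 1] == 1) & (X[:, 2] == 1))).astype(int)

tree = DecisionTreeClassifier(max_depth=3, random_state=0).fit(X, y)
print(export_text(tree, feature_names=["n_ep_encounters", "methotrexate", "specific_ep_code"]))
```

The printed tree exposes threshold rules (eg, splits on n_ep_encounters) that can then be checked against chart review, broadly mirroring how a CART-derived case-finding rule set is built and subsequently validated.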
The overall accuracy of electronic data abstraction in identifying surgical management of EP was 92.3%. Although we demonstrated a high overall accuracy using surgical codes ( ), to further increase the accuracy of management assignment, consideration should also be given to including additional tubal surgery codes that were not part of the case-finding algorithm (because they are not EP-specific) but that may be used by some providers at the time of EP surgery.

This study has strengths and limitations. The socioeconomically diverse patient population at KPNC and KPSC, which is broadly representative of California, makes our findings widely generalizable to health systems with similar clinical patterns (ie, closed health care systems). However, future research is needed to examine whether the enhanced algorithm can be applied in other settings. The validation of the enhanced algorithm using EHR data from periods both before and after the transition from ICD-9 to ICD-10 coding further strengthens this study. While we attempted to identify all potential EP cases by using cases with either an EP-related diagnostic or procedure code, it is possible that EP cases that were miscoded or not coded were not captured, which would have falsely increased the sensitivity of both algorithms. We did not adjust for the influence of baseline characteristics. Therefore, some caution in interpreting the findings is warranted.
The enhanced algorithm yielded better overall EP case identification test results from EHR data, with slight improvements in sensitivity, specificity, and predictive values compared to the algorithm developed using pre-EHR era data, suggesting that the accuracy of EP case identification can be improved by supplementing the Scholes et al algorithm with TAV and ICD-10 diagnosis and procedure codes from EHRs. Overall, the enhanced algorithm for EP case identification in integrated health care databases is adequate to allow for its use in future epidemiological studies. Further studies on the quality of EHRs geared toward specific prenatal outcomes are urgently needed.
Acknowledgments
This study was funded by Bayer AG. We appreciate the contributions of the Kaiser Permanente members who provided their electronic health record information to this study.
Conflicts of Interest
DG is the Principal Investigator at the KPSC site. DG reports grants from Bayer AG during the conduct of the study and grants from Centers for Disease Control and Prevention and the US National Institutes of Health/National Institute of Child Health and Human Development (NIH/NICHD) outside the submitted work. MJF is a coinvestigator at the KPSC site. MJF reports grants from Bayer AG during the conduct of the study and grants from NIH/NICHD outside the submitted work. TRB is the Principal Investigator at the KPNC site and employed by KPNC. She reports grants from Bayer AG during the conduct of the study and outside the submitted work. MAA is a coinvestigator at the KPNC site and employed by KPNC. She also reports grants from Bayer AG during the conduct of the study and outside the submitted work. AA is an employee of Bayer AG, the sponsoring company of this study. AA reports stocks from Bayer AG. SA and MC are employed by KPNC and report grants from Bayer AG during the conduct of the study. JMS, VYC, FX, TMI, JS, and HST are employed by KPSC and report grants from Bayer AG during the conduct of the study. The opinions expressed are solely the responsibility of the authors and do not necessarily reflect the official views of the funding agency.
International Classification of Diseases diagnostic and procedure codes (ICD-9 and ICD-10) and Current Procedural Terminology (CPT-4) codes for ectopic pregnancy in the enhanced algorithm*.
DOCX File, 13 KB
Ectopic pregnancy ascertainment in telephone appointment visits - performance of electronic data abstraction.
DOCX File, 13 KB
References
- Gavrielov-Yusim N, Friger M. Use of administrative medical databases in population-based research. J Epidemiol Community Health 2014 Mar;68(3):283-287. [CrossRef] [Medline]
- Loffer FD. Outpatient management of ectopic pregnancies. Am J Obstet Gynecol 1987 Jun;156(6):1467-1472. [CrossRef] [Medline]
- Ory SJ. New options for diagnosis and treatment of ectopic pregnancy. JAMA 1992;267(4):534-537. [Medline]
- Scholes D, Yu O, Raebel MA, Trabert B, Holt VL. Improving automated case finding for ectopic pregnancy using a classification algorithm. Hum Reprod 2011 Nov;26(11):3163-3168 [FREE Full text] [CrossRef] [Medline]
- Steib SA, Reichley RM, McMullin ST, Marrs KA, Bailey TC, Dunagan WC, et al. Supporting ad-hoc queries in an integrated clinical database. Proc Annu Symp Comput Appl Med Care 1995:62-66 [FREE Full text] [Medline]
- Johnson SB, Hripcsak G, Chen J, Clayton P. Accessing the Columbia Clinical Repository. Proc Annu Symp Comput Appl Med Care 1994:281-285 [FREE Full text] [Medline]
- Byar DP. Problems with using observational databases to compare treatments. Stat Med 1991 Apr;10(4):663-666. [CrossRef] [Medline]
- McDonald CJ, Hui SL. The analysis of humongous databases: problems and promises. Stat Med 1991 Apr;10(4):511-518. [CrossRef] [Medline]
- Garg AX, Adhikari NKJ, McDonald H, Rosas-Arellano MP, Devereaux PJ, Beyene J, et al. Effects of computerized clinical decision support systems on practitioner performance and patient outcomes: a systematic review. JAMA 2005 Mar 9;293(10):1223-1238. [CrossRef] [Medline]
- Kemper AR, Uren RL, Clark SJ. Adoption of electronic health records in primary care pediatric practices. Pediatrics 2006 Jul;118(1):e20-e24. [CrossRef] [Medline]
- D'Avolio LW. Electronic medical records at a crossroads: impetus for change or missed opportunity? JAMA 2009 Sep 09;302(10):1109-1111. [CrossRef] [Medline]
- Dunn MJ. Benefits of electronic medical records outweigh every challenge. WMJ 2007 May;106(3):159-160 [FREE Full text] [Medline]
- Weiner MG, Embi PJ. Toward reuse of clinical data for research and quality improvement: the end of the beginning? Ann Intern Med 2009 Sep 01;151(5):359-360. [CrossRef] [Medline]
- Weiskopf NG, Weng C. Methods and dimensions of electronic health record data quality assessment: enabling reuse for clinical research. J Am Med Inform Assoc 2013 Jan 1;20(1):144-151 [FREE Full text] [CrossRef] [Medline]
- Gallego B, Dunn AG, Coiera E. Role of electronic health records in comparative effectiveness research. J Comp Eff Res 2013 Nov;2(6):529-532. [CrossRef] [Medline]
- Anthony MS, Armstrong MA, Getahun D, Scholes D, Gatz J, Schulze-Rath R, et al. Identification and validation of uterine perforation, intrauterine device expulsion, and breastfeeding in four health care systems with electronic health records. Clin Epidemiol 2019;11:635-643 [FREE Full text] [CrossRef] [Medline]
- Smith N, Iyer RL, Langer-Gould A, Getahun DT, Strickland D, Jacobsen SJ, et al. Health plan administrative records versus birth certificate records: quality of race and ethnicity information in children. BMC Health Serv Res 2010 Nov 23;10:316 [FREE Full text] [CrossRef] [Medline]
- Andrade SE, Scott PE, Davis RL, Li D, Getahun D, Cheetham TC, et al. Validity of health plan and birth certificate data for pregnancy research. Pharmacoepidemiol Drug Saf 2013 Jan;22(1):7-15 [FREE Full text] [CrossRef] [Medline]
- Coleman KJ, Ngor E, Reynolds K, Quinn VP, Koebnick C, Young DR, et al. Initial validation of an exercise "vital sign" in electronic medical records. Med Sci Sports Exerc 2012 Nov;44(11):2071-2076. [CrossRef] [Medline]
- Paul DW, Neely NB, Clement M, Riley I, Al-Hegelan M, Phelan M, et al. Development and validation of an electronic medical record (EMR)-based computed phenotype of HIV-1 infection. J Am Med Inform Assoc 2018 Feb 01;25(2):150-157 [FREE Full text] [CrossRef] [Medline]
- Newton KM, Peissig PL, Kho AN, Bielinski SJ, Berg RL, Choudhary V, et al. Validation of electronic medical record-based phenotyping algorithms: results and lessons learned from the eMERGE network. J Am Med Inform Assoc 2013 Jun;20(e1):e147-e154 [FREE Full text] [CrossRef] [Medline]
- Smith N, Coleman KJ, Lawrence JM, Quinn VP, Getahun D, Reynolds K, et al. Body weight and height data in electronic medical records of children. Int J Pediatr Obes 2010 May 03;5(3):237-242. [CrossRef] [Medline]
- Getahun D, Rhoads G, Fassett M, Chen W, Strauss J, Demissie K, et al. Accuracy of reporting maternal and infant perinatal service system coding and clinical utilization coding. J Med Stat Inform 2013;1:1-3 [FREE Full text] [CrossRef]
- Youden WJ. Index for rating diagnostic tests. Cancer 1950 Jan;3(1):32-35. [CrossRef] [Medline]
- Horsky J, Drucker EA, Ramelson HZ. Accuracy and Completeness of Clinical Coding Using ICD-10 for Ambulatory Visits. AMIA Annu Symp Proc 2017;2017:912-920 [FREE Full text] [Medline]
- Breiman L, Friedman JH, Stone CJ, Olshen RA. Classification and Regression Trees. Belmont, CA: Taylor and Francis; 1984.
Abbreviations
CART: classification and regression tree
EHRs: electronic health records
EP: ectopic pregnancy
ICD-9: International Classification of Diseases, Ninth Revision
ICD-10: International Classification of Diseases, Tenth Revision
KPNC: Kaiser Permanente Northern California
KPSC: Kaiser Permanente Southern California
NIH/NICHD: National Institutes of Health/National Institute of Child Health and Human Development
NPV: negative predictive value
PPV: positive predictive value
TAVs: telephone appointment visits
Edited by G Eysenbach; submitted 04.03.20; peer-reviewed by M Bannick, R Bajpai; comments to author 12.06.20; revised version received 23.07.20; accepted 30.10.20; published 30.11.20
Copyright©Darios Getahun, Jiaxiao M Shi, Malini Chandra, Michael J Fassett, Stacey Alexeeff, Theresa M Im, Vicki Y Chiu, Mary Anne Armstrong, Fagen Xie, Julie Stern, Harpreet S Takhar, Alex Asiimwe, Tina Raine-Bennett. Originally published in JMIR Medical Informatics (http://medinform.jmir.org), 30.11.2020.
This is an open-access article distributed under the terms of the Creative Commons Attribution License (https://creativecommons.org/licenses/by/4.0/), which permits unrestricted use, distribution, and reproduction in any medium, provided the original work, first published in JMIR Medical Informatics, is properly cited. The complete bibliographic information, a link to the original publication on http://medinform.jmir.org/, as well as this copyright and license information must be included.