Assessing Hospital Performance After Percutaneous Coronary Intervention Using Big Data
Background—Although risk adjustment remains a cornerstone for comparing outcomes across hospitals, optimal strategies continue to evolve in the presence of many confounders. We compared a conventional regression-based model with approaches particularly suited to leveraging big data.
Methods and Results—We assessed hospital all-cause 30-day excess mortality risk among 8952 adults undergoing percutaneous coronary intervention between October 1, 2011, and September 30, 2012, in 24 Massachusetts hospitals using clinical registry data linked with billing data. We compared conventional logistic regression models with augmented inverse probability weighted estimators and targeted maximum likelihood estimators to generate more efficient and unbiased estimates of hospital effects. We also compared a clinically informed and a machine-learning approach to confounder selection, using elastic net penalized regression in the latter case. Hospital excess risk estimates ranged from −1.4% to 2.0% across methods and confounder sets. Some hospitals were consistently classified as low or as high excess mortality outliers; others changed classification depending on the method and confounder set used. Switching from the clinically selected list of 11 confounders to a full set of 225 confounders increased the estimation uncertainty by an average of 62% across methods as measured by confidence interval length. Agreement among methods ranged from fair, with a κ statistic of 0.39 (SE: 0.16), to perfect, with a κ of 1 (SE: 0.0).
Conclusions—Modern causal inference techniques should be more frequently adopted to leverage big data while minimizing bias in hospital performance assessments.
WHAT IS KNOWN
Good estimates of hospital quality require adjustment for the baseline sickness of treated patients (case-mix).
Because hospital profiling seeks to estimate the effect of treatment at a given hospital on outcomes, it is best formulated as a causal inference problem, requiring consideration of underlying causal assumptions and methods designed for causal inference.
Clinical registries and billing data can provide rich case-mix information, but most variables are often ignored in favor of a small subset deemed to be clinically relevant a priori.
WHAT THE STUDY ADDS
Leveraging the case-mix information in both clinical registries and billing data using penalized regression methods may alleviate unmeasured confounding and provide better estimates of hospital quality.
Modern causal inference approaches like targeted maximum likelihood combine models of mortality with estimates of treatment hospital (propensity scores) to provide more accurate and efficient estimates of hospital quality.
Numerous governmental and professional organizations rely on quality-based performance measures for public reporting and quality improvement.1–9 Since June 2008, the Centers for Medicare and Medicaid Services has reported hospital-specific risk-adjusted 30-day mortality for acute myocardial infarction, heart failure, and pneumonia9 and since 2012 for 30-day all-cause readmission. The most common and persistent criticism of hospital assessments is the inadequacy of risk adjustment—a concern that the statistical model does not capture true patient sickness (case-mix), and thus, patient presentations confound differences in hospital outcomes.10–12 In the presence of case-mix confounding, a hospital treating especially sick patients would have a higher rate of adverse outcomes regardless of the true hospital quality. Clinical registries, databases that contain hundreds and sometimes thousands of variables, have been increasingly used to mitigate inadequate risk adjustment for hospital assessments.1–3,5 Despite these efforts, unadjusted case-mix differences remain a concern. The unease is, in part, a result of including relatively few confounders in the risk adjustment model because of the simultaneous problems of small numbers of patients per institution and low event rates. Modern approaches that exploit many case-mix confounders while imposing a causal framework are likely to improve the accuracy and enrich the interpretation of hospital comparisons.
The adequacy of risk adjustment is not the only concern with hospital assessments. The virtues of 30-day versus in-hospital outcome assessments have been discussed,13–16 with most recommending the 30-day outcome. The choice of fixed versus random effects to represent hospitals has also received considerable attention.17–21 Regardless of timing of the end point or how hospital effects are accounted for, parametric regression models are typically implemented to adjust either directly or indirectly for hospital case-mix differences.
With increased emphasis on ensuring that patient differences do not confound estimates of hospital effects, it is surprising that so few hospital assessments have adopted a causal inference framework, wherein researchers focus on estimating population-level differences that would have occurred had all subjects been exposed to a certain intervention—in our setting, to a particular hospital. In causal inference, researchers attempt to adjust for all confounders to satisfy the no-unmeasured-confounding assumption, although this assumption is typically not met in practice because important confounding variables do not exist in the data. A few articles have explicitly framed profiling as a causal inference problem and used propensity scores to balance populations across hospitals.10,19,22 Modern statistical techniques optimized for causal inference have been introduced in methodology articles but have rarely been implemented for profiling. Big data sets contain many potential confounders, but important predictive variables may be hidden among many noisy variables. Machine learning for big data has been used frequently in genetics research, but these tools require tuning and may be unfamiliar to outcomes researchers. Exploiting big data sets to optimize risk adjustment and gain new insights into risk factors and hospital quality has thus become both a promising opportunity and an important statistical challenge.
In this article, we characterize the advantages of using modern machine-learning algorithms within a causal inference framework to assess hospital performance, contrasting findings to current, common approaches. The new approaches are specifically designed for causal inference and capitalize on the potential of big data sets to provide new insights. We use a state-mandated clinical registry cohort of patients undergoing percutaneous coronary intervention (PCI) linked to routinely collected billing data, giving us access to hundreds of variables measured on thousands of patients treated at Massachusetts hospitals with 30-day all-cause mortality after PCI as an outcome.
We make use of 4 separate data sources. The first is a state-mandated clinical registry coordinated by the Massachusetts Data Analysis Center (Mass-DAC).5 The data are collected prospectively by trained hospital personnel who use the American College of Cardiology’s National Cardiovascular Data Registry’s instrument23 supplemented with detailed patient- and physician-identifying information for quality assessment. Data are harvested quarterly and adjudicated annually through medical record review using a panel of clinicians and data managers. Mass-DAC links the registry data to the Massachusetts Acute Hospital Case-Mix billing data comprising the Inpatient Discharge Database from the Massachusetts Center for Health Information and Analysis. Information is linked using criteria based on combinations of treatment hospital, medical record number, admission or discharge date, and date of birth.24 The Inpatient Discharge Database includes ≤15 present on admission diagnoses, 15 discharge diagnoses, and a further 15 procedure codes based on the International Classification of Diseases, Ninth Revision, Clinical Modification system. Because some PCIs can be performed in outpatient clinics located in hospitals, our third data source is the Outpatient Observation Database, also part of the Massachusetts Acute Hospital Case-Mix database maintained by the Massachusetts Center for Health Information and Analysis. The outpatient data contain one principal diagnosis and ≤5 additional diagnoses. Henceforth, these 2 data sources are collectively referred to as billing data. To ensure completeness of mortality information, the Mass-DAC cohort is linked to the Massachusetts Registry of Vital Records and Statistics.
Massachusetts Registry of Vital Records and Statistics personnel return merged results files to Mass-DAC based on 3 criteria sets: (1) social security number only, (2) date of birth and first 3 letters of last and first name, or (3) first 3 letters of first name and full last name.25
Patients, Hospitals, and Confounders
We included adults aged ≥18 years undergoing PCI in all nonfederal Massachusetts’ hospitals between October 1, 2011, and September 30, 2012. We excluded patients who resided outside Massachusetts (to ensure completeness of 30-day follow-up) and those who were deemed to be of exceptional risk, defined as patients having high-risk features not captured by any variable in the data or cases where PCI offered the best or only option for improving the chance of survival. Cases submitted as exceptional risk were reviewed for exclusion by an independent committee.
Patients were assigned to the hospital in which they had their first PCI procedure within a 30-day window. Because patients can undergo ≥1 PCI during the hospitalization, we analyzed only the first or index PCI during the hospitalization. Patients could contribute ≥1 PCI admission to our study; however, their second PCI hospitalization had to be >30 days after their index PCI.
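The admission-selection rule above can be sketched in a few lines. This is an illustrative Python/pandas fragment, not the study's code (the analyses used SAS and R), and the column names are hypothetical:

```python
import pandas as pd

# Illustrative sketch (not the study's code): keep each index PCI and drop
# repeat admissions within 30 days of it. Column names are hypothetical.
admissions = pd.DataFrame({
    "patient_id": [1, 1, 1, 2],
    "hospital":   ["A", "A", "B", "C"],
    "pci_date":   pd.to_datetime(["2012-01-01", "2012-01-02",
                                  "2012-03-01", "2012-05-10"]),
})

def index_pcis(df: pd.DataFrame) -> pd.DataFrame:
    """Keep the first PCI per patient, then only later PCIs occurring
    more than 30 days after the previous index PCI."""
    keep = []
    for _, grp in df.sort_values("pci_date").groupby("patient_id"):
        last = None
        for idx, row in grp.iterrows():
            if last is None or (row["pci_date"] - last).days > 30:
                keep.append(idx)
                last = row["pci_date"]
    return df.loc[keep]

cohort = index_pcis(admissions)
# Patient 1 contributes 2 index PCIs (the January 2 repeat is dropped);
# patient 2 contributes 1, so the toy cohort has 3 admissions.
```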
The Mass-DAC registry holds 329 variables per patient that are captured by hospital personnel using the American College of Cardiology’s National Cardiovascular Data Registry’s data collection tool. Because we should only adjust for confounders, that is, variables that influence both the outcome and hospital selection, variables not fitting this criterion, such as those recorded during or after the index PCI, were excluded from our analysis. This resulted in 75 clinical variables from the registry. We gathered more confounders when we linked with billing data by including the 150 most frequently recorded present on admission diagnoses, bringing our final count of confounders to 225. Variables in the Mass-DAC data with missing values were identified and imputed using chained regression imputation implemented in the SAS-callable program IVEware.26
Primary Outcome Measure
The patient end point was all-cause mortality 30 days from the index PCI. The primary hospital outcome was excess mortality defined as the difference between the directly standardized mortality at a hospital and the average of the directly standardized mortality across all hospitals. Positive excess mortality rates suggest that the hospital is performing poorly compared with other hospitals in the state, whereas negative rates indicate that the hospital is performing well.
We adopted a total of 6 different approaches for assessing hospital performance. The approaches differed along 2 factors: how the confounders were selected for inclusion and how the causal effect of the hospital was estimated. For confounder selection, we used either a small, clinically determined subset of 11 variables27 or a full set of 225 available confounders. Logistic regression with an elastic net penalty was used to adjust for the larger set of confounders. This approach begins by assuming that the association of each variable with mortality or hospital selection is zero or near zero before making the data prove the worth of each variable. The strength of this assumption is encoded as a tuning parameter that we set to maximize performance using cross-validation. Typically, many variables receive coefficients of zero, effectively eliminating them from the regression.
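The elastic net step can be sketched on synthetic data. This minimal example uses scikit-learn in place of the R package glmnet that the study used; the variable counts, effect sizes, and tuning value are invented for illustration:

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

# Synthetic illustration: 50 candidate confounders, of which only the
# first 2 truly affect mortality; all counts and values are invented.
rng = np.random.default_rng(0)
n, p = 1000, 50
X = rng.normal(size=(n, p))
logit = -3 + 1.5 * X[:, 0] + 1.0 * X[:, 1]
y = rng.binomial(1, 1 / (1 + np.exp(-logit)))

# The elastic net penalty starts every coefficient at or near zero; the
# tuning value C (fixed here for brevity, chosen by cross-validation in
# practice) controls how much evidence a variable needs to earn a
# nonzero coefficient.
enet = LogisticRegression(penalty="elasticnet", solver="saga",
                          l1_ratio=0.5, C=0.1, max_iter=5000).fit(X, y)
selected = np.flatnonzero(enet.coef_[0])
# Many noise variables receive a coefficient of exactly zero and are
# effectively eliminated from the regression.
```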
For these 2 confounder sets, we implemented 3 methods to estimate hospital effects on mortality adjusting for patient risk: regression-only, augmented inverse probability weighting (A-IPW), and targeted maximum likelihood estimation (TMLE) approaches. For ease of exposition, we used hospital fixed effects rather than random effects to characterize hospital quality. Hospitals were included as a set of indicator variables identifying the hospital where the PCI was performed. In the regression-only approach, we estimated a logistic regression model of mortality on hospital-specific intercepts and the confounders. This is a commonly used method and served as our standard of comparison. A-IPW seeks to improve on the regression-only approach by combining regression with propensity scores—in our setting, estimates of the probability of undergoing PCI at a particular hospital. It has a doubly robust property, yielding unbiased estimates if either the mortality regression or propensity scores are properly specified.19,28,29 We estimated multinomial regressions using generalized logits to produce 24 propensity scores (one for each hospital) for every patient in the Mass-DAC database. The inverses of the propensity scores were used as weights in the A-IPW estimator, augmenting the outcome regression with information from the hospital regression. The TMLE approach also produces a doubly robust estimator that combines a mortality and hospital regression using a more flexible algorithm that guarantees additional desirable statistical properties.30,31 TMLE updates the initial mortality regression using the predicted propensity scores in a statistically optimal way to reduce bias in the estimate of hospital quality. For comparison purposes, we used the same fixed-effects mortality regression for all 3 methods and the same multinomial propensity score regression in the A-IPW and TMLE estimators.
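The A-IPW combination of the outcome and propensity regressions can be sketched as follows. This is a simplified stand-in on simulated data (4 hospitals, 3 confounders, all dimensions invented), not the study's estimator code:

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

# Simulated data standing in for the registry: 4 hospitals, 3 confounders,
# mortality driven by the first confounder only. All sizes are invented.
rng = np.random.default_rng(1)
n, n_hosp = 2000, 4
X = rng.normal(size=(n, 3))                 # baseline confounders
A = rng.integers(0, n_hosp, size=n)         # treating hospital
y = rng.binomial(1, 1 / (1 + np.exp(3 - X[:, 0])))  # 30-day mortality

# Outcome regression: mortality on hospital indicators plus confounders.
H = np.eye(n_hosp)[A]
outcome = LogisticRegression(max_iter=1000).fit(np.hstack([H, X]), y)
# Propensity model: multinomial regression of hospital choice on confounders.
prop = LogisticRegression(max_iter=1000).fit(X, A)

def aipw_rate(h):
    """A-IPW estimate of statewide mortality had everyone been treated at h."""
    Hh = np.tile(np.eye(n_hosp)[h], (n, 1))
    m_h = outcome.predict_proba(np.hstack([Hh, X]))[:, 1]  # modeled risk at h
    e_h = prop.predict_proba(X)[:, h]                      # propensity score
    # Augmentation: inverse-weighted residuals of patients actually at h.
    return np.mean(m_h + (A == h) / e_h * (y - m_h))

rates = np.array([aipw_rate(h) for h in range(n_hosp)])
excess = rates - rates.mean()   # directly standardized excess mortality
```

If the outcome model is wrong but the propensity model is right (or vice versa), the augmentation term still centers the estimate correctly, which is the doubly robust property described above.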
The Appendix in the Data Supplement provides further details of the A-IPW and TMLE algorithms and our use of penalized regression.
Excess Hospital Mortality
Our mortality regressions yielded estimates of adjusted confounder and hospital effects, both on the log-odds (logit) scale. We multiplied each patient’s baseline variables by the estimated confounder coefficients, summed them, and then added the estimated intercept associated with a given hospital. This yielded an estimate of the log-odds of mortality for each patient in the state had they been treated at the hospital. We then inverted the log-odds to obtain the probability of mortality for each patient and averaged these to obtain the overall risk-adjusted mortality at that hospital. This was repeated for each hospital before subtracting the mean of these 24 hospital estimates from each to obtain the excess mortality at each hospital. To quantify the uncertainty in our estimates, we used bias-corrected bootstrap resampling to generate confidence intervals,32 resampling hospitals with replacement and using all admissions from the sampled hospital. These confidence intervals were Bonferroni adjusted to account for multiple comparisons so that the set of confidence intervals for the 24 hospitals had 95% confidence. All estimators were implemented using R software, including the packages tmle and glmnet.33,34
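The standardization arithmetic in the paragraph above can be sketched on hypothetical logit-scale estimates. All numbers are invented, and the bias-corrected bootstrap itself is omitted; only the Bonferroni-adjusted per-hospital level is shown:

```python
import numpy as np

# Hypothetical fitted quantities on the logit scale: each patient's summed
# confounder contribution and each hospital's intercept (values invented).
rng = np.random.default_rng(2)
n_hosp = 24
risk_score = rng.normal(-4, 1, size=5000)      # confounder part of the logit
hosp_effect = rng.normal(0, 0.3, size=n_hosp)  # hospital intercepts

def excess_mortality(hosp_effect, risk_score):
    # For each hospital: send every patient in the state there, invert the
    # logit, and average to get the directly standardized mortality rate.
    rates = np.array([np.mean(1 / (1 + np.exp(-(risk_score + b))))
                      for b in hosp_effect])
    # Excess mortality: deviation from the average standardized rate.
    return rates - rates.mean()

excess = excess_mortality(hosp_effect, risk_score)

# Bonferroni adjustment: each of the 24 bootstrap intervals is built at
# level 0.05 / 24 so the set has 95% simultaneous confidence.
per_hospital_alpha = 0.05 / n_hosp
```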
We compared findings using 4 different summaries. First, for each hospital, we graphically compared the propensity scores generated for all patients by the clinically informed confounders with those generated by the elastic net. Regressions producing a wider range of propensity scores are generally preferred. Second, we classified hospitals into 3 categories using each method and set of confounders: high mortality (if the lower limit of the confidence interval for excess mortality was above 0), expected mortality (if the interval included 0), and low mortality (if the upper limit of the interval was below 0) hospitals. Third, we compared the similarity of the approaches using κ statistics, with a κ statistic >0.8 generally indicating high agreement between 2 classifiers.35 We concluded that pairs of modeling approaches classified hospitals differently if their pairwise κ statistic was more than 2 SEs below 0.8. We used the R package psych to calculate κ statistics and SEs.36 Finally, we computed the total lengths of the confidence intervals, noting that shorter intervals are generally preferred.
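The classification comparison can be illustrated with a toy example, using scikit-learn's Cohen's κ in place of the R package psych the study used; the hospital labelings below are invented:

```python
from sklearn.metrics import cohen_kappa_score

# Invented labelings of 24 hospitals by 2 methods; each hospital is called
# low, expected, or high mortality from its excess-mortality interval.
method_a = ["expected"] * 20 + ["high", "high", "low", "low"]
method_b = ["expected"] * 19 + ["high", "high", "high", "low", "low"]

# Kappa measures agreement beyond chance; identical labelings give kappa = 1.
kappa = cohen_kappa_score(method_a, method_b)
```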
Exclusions and Missing Data
A total of 12 554 PCI admissions were observed in Massachusetts between October 1, 2011, and September 30, 2012, of which 11 114 remained after removing exceptional risk cases, non-Massachusetts residents, and multiple admissions within a 30-day period. Of these, 9389 admissions (75%) merged with the Center for Health Information and Analysis billing data. The 30-day mortality rates of hospitalizations retained and those excluded did not differ. Across hospitals, missingness because of unmerged data ranged from <1% of PCI admissions to 31% of PCI admissions. Finally, one hospital with no mortalities was eliminated because estimation was not possible with fixed-effects regression. Thus, the final cohort included 9325 PCI hospitalizations for 8952 unique patients across 24 hospitals.
We also dealt with a small amount of missing data in the Mass-DAC registry using imputation. In particular, the stenosis percentage fields had significant amounts of missingness (2.4% to 3.7% missing). A few other fields had trace amounts of missing cells (<0.1%). Missing data, imputation, and the sensitivity of our results to inclusion of multiple patient admissions are discussed further in the Appendix in the Data Supplement.
Unadjusted Mortality and Case-Mix
The unadjusted all-cause 30-day mortality rate across the 24 hospitals was 2.0% for patients undergoing PCI, corresponding to 188 deaths out of 9325 admissions. Hospital unadjusted mortality ranged from 0.6% to 5.6%. Substantial case-mix heterogeneity among hospitals existed for the clinical confounders (Figure 1). In some cases, the differences were large. For example, emergent or salvage PCI admissions ranged from 15% to 100% with a mean of 35%. Likewise, previous cardiac arrest ranged from 0% to 15%. The hospital differences become more apparent when examining the hospital-specific distributions of the full list of confounders from the Mass-DAC registry (Table 1).
The present on admission diagnosis codes (Table 2) indicate substantial chronic and acute coronary disease at admission, with 30-day mortality correlating with the more severe conditions. Although we retained only the 150 most frequent diagnoses, fewer than 10 patients had no present on admission diagnosis, with a median diagnosis count of 6 per patient and a maximum of 15.
Probability of PCI Admission at Each Massachusetts Hospital
Figure 2 shows the distribution of propensity scores for each hospital across the entire patient population, stratified by confounder set. Patients with negative log-odds propensity scores are unlikely to be treated at a given hospital, whereas those with scores closer to or greater than zero are more likely to be treated at the hospital. The propensity scores estimated from clinical variables, displayed in blue, are narrow and centered near −3, corresponding to the baseline probability of ≈1/24 for treatment at a given hospital. The propensity scores (orange dashed line) estimated using the full 225 variables discriminate better, placing the bulk of patients to the left of the clinically estimated scores (lower probability of treatment) and a few to the right (higher probability). The richer confounder set leads to more extreme propensity scores and consequently weights as large as 1 600 000 (corresponding to a propensity score well below 0.001). High weights can indicate a problem because patients with low propensity scores for a given hospital cannot be compared with a similar patient who underwent PCI there.
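The scale of these propensity scores and weights can be checked with quick arithmetic; this sketch only illustrates the logit and inverse-weight relationships quoted above:

```python
import numpy as np

# Quick arithmetic behind the propensity score scale (illustrative only).
def inv_logit(x):
    return 1.0 / (1.0 + np.exp(-x))

# A log-odds propensity near -3 matches the ~1-in-24 baseline chance of
# being treated at any one of the 24 hospitals.
baseline_logit = np.log((1 / 24) / (1 - 1 / 24))   # about -3.14

# Inverse probability weight at the baseline propensity: about 24.
baseline_weight = 1.0 / inv_logit(baseline_logit)

# A propensity score well below 0.001 produces the extreme weights seen
# under the full confounder set.
extreme_weight = 1.0 / 0.625e-6                    # 1.6 million
```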
Excess Hospital Risk
Analyses of hospital-specific excess mortality risk indicate differences related both to approach and to choice of confounders (Figure 3). Comparing the top to the bottom panels, all methods classify hospital B as an outlier under the clinical confounder set but not under the full set. The full set of confounders accounts for the extra risk in hospital B’s patient population not captured by the clinically selected set, a discrepancy also visible in the propensity score densities for hospital B (Figure 2). Hospital R’s classification also benefits from the inclusion of more confounders. In contrast, hospital J is not an outlier under any method with clinical confounders but is a high mortality outlier under all methods when the full confounders are used. Hospitals E and T are classified as high mortality outliers across methods and confounder sets.
Figure 4 presents the estimated regression coefficients for the full set of confounders. Although all confounders are considered, the elastic net penalty selected 64 nonzero confounders, providing a much richer set of variables for risk adjustment than the 11 preselected for the clinical confounder set. A similar coefficient plot for the clinical confounders appears in the Appendix in the Data Supplement. In addition to the impact of the confounder set, some differences in conclusions arise when changing the approach used to estimate excess hospital mortality (Table 3). In the clinical set, TMLE classifies hospitals Q, R, S, and G as expected-mortality hospitals, whereas regression-only and A-IPW classify all 4 as outliers. Both TMLE and A-IPW classify hospital H as having lower than expected mortality, whereas regression-only does not. In the full confounder set, TMLE is the only method that does not classify hospital I as an outlier.
Fewer hospitals are selected as outliers when using the full set of confounders compared with the clinically selected set. This is due, at least in part, to increased uncertainty in the hospital estimates as reflected in the wider confidence intervals of the full confounder set. On average, the regression-only, A-IPW, and TMLE interval estimates using the full confounder set are respectively 65%, 70%, and 51% longer compared with estimates using the clinically selected confounders.
Agreement Among Approaches
The least agreement and hence the largest hospital classification differences occur between the regression-only approach with clinical confounders and the approaches that used the full confounder set, particularly TMLE with full confounders (Table 4). In contrast, the more sophisticated methods vary less when used across confounder sets. Hospital classifications obtained from A-IPW with clinical confounders did not agree with TMLE using full confounders. When the full confounder set was used, no significant differences in hospital classification were observed across methods.
Increased access to registry data linked with other data sources presents several opportunities to outcomes researchers, including the ability to better risk adjust case-mix when assessing hospital performance. We introduced methods for direct standardization, which answer the question, “what is the mortality that would be observed if every patient in the state were treated at this hospital?” In contrast, indirect standardization seeks to determine the expected mortality for each hospital’s patient population that would be observed if they were treated at a hypothetical average hospital.10,19 Often, indirect standardization answers the more pertinent policy question, as hospitals will by necessity treat certain subsets of the population. Directly standardized outcomes may be of more interest to consumers, who can potentially choose where to receive care and would like to compare different hospitals.
We used a penalized regression approach to include more confounders, which led to wider confidence intervals. Wider intervals do not imply that the parsimonious subset is giving better estimates. In fact, when using a smaller set of confounders, the researcher implicitly assumes that the excluded variables are not true confounders, and this prior knowledge is not based on the observed data. In contrast, using a penalized regression method, such as the elastic net, explicitly considers each potential confounder and allows the data to determine which confounders contain important information. Wider confidence intervals reflect the fact that our estimates are truly more uncertain than a parsimonious confounder subset would have us think. Penalized regression leverages large data sets to account for residual confounding in a way that is simply not possible with standard logistic models. Of course, data-driven variable selection does not negate the importance of subject-matter knowledge, and penalized regression should be used in conjunction with a generous set of clinically relevant variables to determine key variables from a large set of candidates.
We introduced A-IPW and TMLE as alternative approaches to the standard regression-only approach for risk adjustment. Both are doubly robust estimators that involve estimation of a propensity score regression in addition to a mortality outcome regression and are unbiased if either regression is consistently estimated. Despite this theoretical nicety, A-IPW yielded wider confidence intervals than our other methods. This finding is supported by theoretical and empirical studies.19,30 Despite using the same hospital multinomial regression model, TMLE gave more stable results compared with A-IPW and is theoretically formulated to minimize bias compared with the standard regression-only model.30
An alternative approach to causal inference is the use of instrumental variables: variables associated with treatment and related to the outcome only through treatment. A canonical instrumental variable in profiling is a patient’s distance to a hospital.37 Ultimately, the choice of approach comes down to data availability—good instrumental variables may avoid the problem of unmeasured confounding but require strong assumptions about the underlying causal mechanism. Our approaches instead mitigate confounding by explicitly considering many variables for risk adjustment and are particularly useful when many confounders are available, as with registry and billing data.
κ statistics also support the use of the approaches we proposed in the sense that the larger confounder set gave more consistent results across methods, and the more advanced methods gave more consistent results across confounder sets. Our estimates of risk-adjusted hospital mortality became more similar when we dialed up the sophistication of our approach, theoretically converging on the true values as we took steps to minimize bias in our confounder selection and parameter estimation.
No approach is without limitations. We implemented a fixed-effects framework for hospital effects to simplify our exposition and isolate the performance of the methods we introduced with respect to standard practice. A limitation of fixed-effects models is the inability to estimate adjusted mortality for hospitals with no mortalities and even sometimes with few mortalities. As a result, we were forced to drop one of the hospitals in our original data set. A random-effects framework would assert that hospital effects are related by a common distribution such as a normal (bell curve), allowing information to be shared between hospitals, stabilizing estimates, and often reducing the number of classified outliers.18 Moreover, we used single imputation—multiple imputation strategies would fully account for the additional uncertainty in the imputation itself. For our goals of comparative assessments between and among approaches, conclusions would be unlikely to change if we used multiple imputation strategies. Additionally, we eliminated patients from study who did not link with the Massachusetts Center for Health Information and Analysis billing data from which we drew diagnosis codes. Although we found no difference in mortality, we did find that the success of linkages across hospitals differed, perhaps based on systematic features of the hospitals. These discrepancies are likely the result of inconsistent data collection and reporting procedures at the hospital level and can present obstacles to good statistical inference if they become severe.
Finally, as in all causal inference, the possibility of residual confounding is a concern and could arise from uncollected patient measures. However, considering all 225 potential confounders for adjustment strengthens our confidence that residual confounding is minimized compared with our parsimonious approach that adjusts for only 11 risk factors.
Working in a causal inference framework with modern statistical techniques and including substantially more confounders can yield improvements over standard risk adjustment strategies. Better estimates of underlying hospital quality were obtained by including these additional confounders and adjusting estimates based on the propensity score. We support the use of TMLE in conjunction with penalized regression to leverage many confounders in large data sets when possible. If investigators are committed to working with small prechosen variable sets, TMLE can still be used to improve estimates. Such modifications have the potential to make substantive differences in hospital outlier classification. In the future, researchers can expand on this work by incorporating machine-learning ensembles, adopting random-effects models, or implementing fully Bayesian approaches for direct standardization.
We thank the Massachusetts Department of Public Health for permission to use the Mass-DAC registry data and the Massachusetts Center for Health Information and Analysis for access to the discharge billing data sets. We also thank Caroline Wood, Department of Healthcare Policy, Harvard Medical School, for technical assistance with the preparation of this article.
Sources of Funding
Drs Rose and Normand were supported, in part, by grant GM111339 from the National Institute of General Medical Sciences, Bethesda, MD. A. Lovett and M. Cioffi were supported, in part, by a contract from the Commonwealth of Massachusetts (the Massachusetts Data Analysis Center [Mass-DAC]).
Dr Normand, A. Lovett, R. Wolf, and M. Cioffi are contracted by the Massachusetts Department of Public Health to collect, analyze, and publicly report on hospital risk-standardized mortality after PCI and after cardiac surgery at all nonfederal Massachusetts hospitals. The other authors report no conflicts.
The editors had no role in the evaluation or in the decision about the acceptance of this article.
This article was handled independently by Andrew J. Epstein, PhD, MPP, as a Guest Editor.
The Data Supplement is available at http://circoutcomes.ahajournals.org/lookup/suppl/doi:10.1161/CIRCOUTCOMES.116.002826/-/DC1.
- Received March 4, 2016.
- Accepted July 26, 2016.
- © 2016 American Heart Association, Inc.
- 1.↵The Society of Thoracic Surgeons. Quality Performance Measures. http://www.sts.org/quality-research-patient-safety/quality/quality-performance-measures. Accessed February 18, 2016.
- 2.↵The American College of Cardiology. Quality Programs. http://www.acc.org/tools-and-practice-support/quality-programs. Accessed February 18, 2016.
- 3.↵The American College of Surgeons. American College of Surgeons National Surgical Quality Improvement Program (ACS NSQIP). https://www.facs.org/quality-programs/acs-nsqip. Accessed February 18, 2016.
- 4.↵New York State Department of Health. NYS Health Profiles. http://profiles.health.ny.gov/. Accessed February 18, 2016.
- 5.↵Massachusetts Data Analysis Center. Cardiac Study-Annual Reports. http://www.massdac.org/index.php/reports/cardiac-study-annual/. Accessed February 18, 2016.
- 6.↵California Office of Statewide Health Planning and Development. Health Care Information Division. http://oshpd.ca.gov/HID/. Accessed February 18, 2016.
- 7.↵Pennsylvania Health Care Cost Containment Council. About the Council. http://www.phc4.org/council/mission.htm. Accessed February 18, 2016.
- 8.↵State of New Jersey Department of Health. Office of Healthcare Quality Assessment Homepage. www.state.nj.us/health/healthcarequality/. Accessed February 18, 2016.
- 9.↵Centers for Medicare and Medicaid Services. Hospital Quality Initiative: Outcome Measures. https://www.cms.gov/medicare/quality-initiatives-patient-assessment-instruments/hospitalqualityinits/outcomemeasures.html. Accessed February 18, 2016.
- Shahian DM, Normand SL.
- Krumholz HM, Wang Y, Mattera JA, Wang Y, Han LF, Ingber MJ, Roman S, Normand SL.
- Fonarow GC, Pan W, Saver JL, Smith EE, Reeves MJ, Broderick JP, Kleindorfer DO, Sacco RL, Olson DM, Hernandez AF, Peterson ED, Schwamm LH.
- Drye EE, Normand SL, Wang Y, Ross JS, Schreiner GC, Han L, Rapp M, Krumholz HM.
- Normand S-LT, Ash AS, Fienberg SE, Stukel TA, Utts J, Louis TA.
- MacKenzie TA, Grunkemeier GL, Grunwald GK, O’Malley AJ, Bohn C, Wu Y, Malenka DJ.
- Varewyck M, Goetghebeur E, Eriksson M, Vansteelandt S.
- Ash AS, Fienberg SE, Louis TA, Normand S-LT, Stukel TA, Utts J.
- 23.↵NCDR CathPCI Registry. http://cvquality.acc.org/NCDR-Home.aspx. Accessed August 14, 2015.
- 24.↵Massachusetts Center for Health Information and Analysis. Acute Hospital Case Mix Databases. http://www.chiamass.gov/case-mix-data/. Accessed February 19, 2016.
- 25.↵Massachusetts Registry of Vital Records and Statistics. Vital Records Database. http://www.mass.gov/dph/rvrs. Accessed February 19, 2016.
- Raghunathan TE, Solenberger P, Van Hoewyk J.
- 27.↵Massachusetts Data Analysis Center. Adult Percutaneous Intervention in the Commonwealth of Massachusetts Fiscal Year 2012 Report. http://www.massdac.org/wp-content/uploads/PCI-FY2012.pdf. Accessed February 19, 2016.
- van der Laan MJ, Robins JM.
- Farrell MH.
- van der Laan MJ, Rose S.
- van der Laan MJ, Rubin DB.
- Efron B.
- Gruber S, van der Laan M.
- Friedman J, Hastie T, Simon N, Tibshirani R.
- Revelle W.