Patterns of Hospital Performance in Acute Myocardial Infarction and Heart Failure 30-Day Mortality and Readmission
Background— In 2009, the Centers for Medicare & Medicaid Services began publicly reporting hospital-level risk-standardized 30-day mortality and readmission rates after acute myocardial infarction (AMI) and heart failure (HF). We describe patterns of hospital performance based on these measures.
Methods and Results— We calculated the 30-day mortality and readmission rates for all Medicare fee-for-service beneficiaries ages 65 years or older with a primary diagnosis of AMI or HF, discharged between July 2005 and June 2008. We compared weighted risk-standardized mortality and readmission rates across Hospital Referral Regions and hospital structural characteristics. The median 30-day mortality rate was 16.6% for AMI (range, 10.9% to 24.9%; 25th to 75th percentile, 15.8% to 17.4%; 10th to 90th percentile, 14.7% to 18.4%) and 11.1% for HF (range, 6.6% to 19.8%; 25th to 75th percentile, 10.3% to 12.0%; 10th to 90th percentile, 9.4% to 13.1%). The median 30-day readmission rate was 19.9% for AMI (range, 15.3% to 29.4%; 25th to 75th percentile, 19.5% to 20.4%; 10th to 90th percentile, 18.8% to 21.1%) and 24.4% for HF (range, 15.9% to 34.4%; 25th to 75th percentile, 23.4% to 25.6%; 10th to 90th percentile, 22.3% to 27.0%). We observed geographic differences in performance across the country. Although there were some differences in average performance by hospital characteristics, there were high and low hospital performers among all types of hospitals.
Conclusions— In a recent 3-year period, 30-day risk-standardized mortality rates for AMI and HF varied among hospitals and across the country. The readmission rates were particularly high.
Received May 29, 2009; accepted July 2, 2009.
National hospital quality profiling efforts have recently extended from measuring processes of care to the assessment of short-term outcomes.1 The process measures target key aspects of care that are strongly linked to outcomes. However, these measures address only a narrow spectrum of all the facets of care and evaluate care in subsets of patients who meet inclusion and exclusion criteria.2 In many cases, the process measures assess care in only a minority of all the patients admitted for a specific condition.3 As a result, there is growing interest in the use of outcomes measurement, which can assess the end result of care for all patients.4 Outcome measures can provide a broad perspective on hospital performance and complement the information from the process measures.
The Centers for Medicare & Medicaid Services (CMS) is publicly reporting hospital rates of mortality and readmission on its Hospital Compare web site.1 Hospital Compare, part of the CMS Hospital Quality Initiative, provides the public with information on process of care, outcomes, and Medicare payment and volume for certain conditions, as well as patients’ hospital experiences. The outcome measures include 30-day mortality and 30-day readmission for acute myocardial infarction (AMI), heart failure (HF), and pneumonia. These outcome measures evaluate hospital-level performance across the country for these common medical conditions.
Readmission was chosen in addition to mortality because it is expensive to the health care system and commonly represents a preventable, adverse event for patients. Higher-quality care can reduce the risk for readmission, as has been shown for many interventions for patients with HF.5,6 Although the factors that influence readmission can extend beyond the hospitalization, the rates are assigned to the hospital that discharged the patient because of its central role in orchestrating the transition of patients from inpatient to outpatient status. The readmission rate is truly a community property, but the hospital can be a central organizing force in improving this outcome.
The aim of this report is to complement the national release of the hospital-level AMI and HF measures by providing a descriptive summary of the variation in 30-day mortality and readmission rates by region and hospital characteristics. The information presented in this report is based on the official CMS results and includes information from all hospitals that met the measures’ inclusion criteria.
WHAT IS KNOWN
The Centers for Medicare & Medicaid Services is publicly reporting risk-standardized 30-day mortality and readmission rates for US hospitals based on the experience of Medicare beneficiaries.
These measures, which are approved by the National Quality Forum, are validated by comparison with rates that are generated from models based on medical record data.
WHAT THE STUDY ADDS
The distributions of hospital risk-standardized rates reveal the patterns of hospital performance for acute myocardial infarction and heart failure in the United States.
There are regional differences in performance.
Within categories based on hospital characteristics, there is much overlap in performance.
The measures are based on Medicare fee-for-service beneficiaries aged 65 years and older discharged between July 1, 2005, and June 30, 2008, from short-term acute and critical access nonfederal hospitals. The measures include beneficiaries with a principal discharge diagnosis of AMI (International Classification of Diseases, Ninth Revision, Clinical Modification, codes 410.00, 410.01, 410.10, 410.11, 410.20, 410.21, 410.30, 410.31, 410.40, 410.41, 410.50, 410.51, 410.60, 410.61, 410.70, 410.71, 410.80, 410.81, 410.90, and 410.91) or HF (402.01, 402.11, 402.91, 404.01, 404.03, 404.11, 404.13, 404.91, 404.93, 428.0, 428.1, 428.20, 428.21, 428.22, 428.23, 428.30, 428.31, 428.32, 428.33, 428.40, 428.41, 428.42, 428.43, and 428.9).7 The mortality measures assign responsibility to the hospital that initially admitted the patient, and the readmission measures assign responsibility to the hospital that discharged the patient to a nonacute setting. The mortality measures exclude patients enrolled in the Medicare hospice program in the year before, or on the day of, admission. Both measures exclude patients who were not in fee-for-service for 1 year before admission or who were discharged against medical advice. For patients with multiple hospitalizations during the study period, the mortality measures included only 1 randomly selected hospitalization. For the readmission measures, an admission counted as a readmission outcome was not also treated as a new index hospitalization. The AMI readmission measure does not count some planned procedures as readmissions. More detailed information on the statistical methodology is available in published technical reports8 and prior publications.9–13 A sample using a 3-year period of discharges (July 2005 to June 2008) was chosen for the public measure because it increases the number of included hospitalizations, improving the precision of the estimated hospital rates.
The CMS outcome measures use Medicare administrative claims and enrollment data. Index admissions and readmissions are identified from inpatient claims, whereas inpatient, outpatient department, and physician claims in the year before the index admission are used to identify patient risk factors.
The 30-day mortality measure counts deaths for any cause, in any location, within 30 days of the hospital admission date, as identified by the date of death in CMS enrollment files or from the discharge status on the inpatient claim. The readmission measures count readmissions for any cause, to any acute care hospital caring for Medicare patients, within 30 days of discharge for those surviving to discharge. The AMI readmission measure does not count nonemergent hospitalizations for percutaneous coronary intervention and coronary artery bypass graft as readmissions.
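The 30-day windows described above can be encoded as a small sketch; the helper functions and names here are our own illustration, and the actual measure specifications additionally handle transfers, same-day events, planned procedures, and data-source details:

```python
from datetime import date, timedelta
from typing import Optional

WINDOW = timedelta(days=30)

def died_within_30_days(admission: date, death: Optional[date]) -> bool:
    """Mortality outcome: death from any cause, in any location,
    within 30 days of the hospital admission date."""
    return death is not None and admission <= death <= admission + WINDOW

def readmitted_within_30_days(discharge: date, readmission: Optional[date]) -> bool:
    """Readmission outcome: any-cause readmission to any acute care
    hospital within 30 days of discharge, among patients surviving
    to discharge."""
    return readmission is not None and discharge < readmission <= discharge + WINDOW

# A death 31 days after admission does not count toward the mortality measure.
counts = died_within_30_days(date(2008, 1, 1), date(2008, 2, 1))  # False
```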
We summarized mortality and readmission results by hospital characteristics using data from the 2007 American Hospital Association Annual Survey Database.14 Approximately 1% of the hospitals in our measure samples could not be matched to the 2007 American Hospital Association data.
Hospital Referral Regions
To summarize geographic patterns in the measures, we used the 2006 Hospital Referral Regions (HRRs) developed by the Dartmouth Atlas of Health Care Project.15 The HRRs are widely used to summarize variation in medical care inputs, utilization, and outcomes and provide a more detailed look at variation in outcomes than results at the state level.
The CMS measures use a hierarchical regression model to estimate risk-standardized mortality rates (RSMRs) or risk-standardized readmission rates (RSRRs) for each hospital. This approach reflects the assumption that a hospital-specific component of quality of care exists and that it will affect the outcomes of patients at a particular hospital.
The models account for patient risk using risk factors defined from CMS administrative data. We fitted random-effects logistic regression models using 3 years of data. The models include patient risk factors and random intercepts for each hospital. Model goodness of fit is shown in Table 1, as are the other model characteristics. Model discrimination at the patient level was quantified by c-statistics. We estimated the amount of variation explained by observed patient factors using the adjusted Cox and Snell pseudo R2 statistics. Last, using the variance of the hospital intercepts, we estimated the intrahospital correlation (IHC), which is the ratio of between-hospital variation to total variation in outcomes.
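To illustrate the IHC, the sketch below uses the latent-scale formula commonly applied to random-intercept logistic models, in which the within-hospital residual variance is fixed at π²/3. This convention is an assumption made here for illustration; the cited technical reports describe the exact computation used for the measures.

```python
import math

def latent_scale_icc(tau_sq: float) -> float:
    """Ratio of between-hospital variation to total variation for a
    random-intercept logistic model, using the standard logistic
    residual variance pi^2/3 on the latent scale (illustrative
    convention, not necessarily the exact CMS computation).

    tau_sq : variance of the hospital-specific random intercepts
    """
    return tau_sq / (tau_sq + math.pi ** 2 / 3)

# A small intercept variance yields an IHC in the neighborhood of the
# values reported for these measures (roughly 0.015 to 0.053).
icc = latent_scale_icc(0.17)
```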
Mortality and readmission models based on administrative data were validated against models based on medical record data. The output from the administrative model was highly correlated with results from the medical record model. The measures adhere to published standards for outcome measures and were endorsed by the National Quality Forum.9–13 The models estimate the log-odds of the outcome (mortality or readmission) as a function of patient demographic and clinical risk factors found to be clinically suitable and predictive of the outcome for that condition and as a function of a random hospital-specific effect. The rates are calculated as the ratio of "predicted" to "expected" deaths or readmissions. More specifically, the expected number of deaths for each hospital is estimated using its patient mix and the average hospital-specific intercept. The predicted number of deaths for each hospital is estimated using the same patient mix but the hospital's own intercept. The RSMRs and RSRRs are the ratio of predicted to expected deaths or readmissions multiplied by the national unadjusted rate for the condition and outcome. To convey the uncertainty associated with this estimate, each rate is accompanied by a 95% interval estimate based on bootstrap simulation (similar to a confidence interval) that represents the range of values that includes the hospital's true rate with 95% probability.
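A minimal sketch of the predicted-to-expected calculation, assuming a single risk factor and hypothetical parameter values (the production models include many risk factors and are fit hierarchically):

```python
import numpy as np

def risk_standardized_rate(x, hospital_intercept, mean_intercept, beta, national_rate):
    """Illustrative RSMR/RSRR: ratio of 'predicted' to 'expected' events,
    multiplied by the national unadjusted rate.

    x                  : array of risk-factor values for one hospital's patients
    hospital_intercept : the hospital-specific random intercept
    mean_intercept     : the average intercept across hospitals
    beta               : coefficient on the (single, illustrative) risk factor
    national_rate      : national unadjusted outcome rate (e.g., 0.166 for AMI mortality)
    """
    logistic = lambda z: 1.0 / (1.0 + np.exp(-z))
    # Predicted events use the hospital's own intercept; expected events use
    # the average intercept applied to the same patient mix.
    predicted = logistic(hospital_intercept + beta * x).sum()
    expected = logistic(mean_intercept + beta * x).sum()
    return (predicted / expected) * national_rate

# A hospital whose intercept equals the national average gets exactly the
# national rate, regardless of its patient mix.
x = np.array([0.2, 1.5, -0.3, 0.8])
rate = risk_standardized_rate(x, hospital_intercept=-1.6, mean_intercept=-1.6,
                              beta=0.5, national_rate=0.166)
```

Because both the numerator and denominator are evaluated on the hospital's own patients, differences in case mix cancel, and the ratio isolates the hospital-specific contribution.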
Weighted RSMRs and RSRRs
We calculated the mean of the RSMRs and RSRRs for hospitals in each HRR, weighting each rate by the inverse of the variance of the hospital's estimated rate, in which the variance is calculated from the bootstrap distribution. We applied the same approach to the summary of outcomes within each hospital characteristic group. Hospitals with more precise estimates, primarily those with larger sample sizes, therefore carry more weight in the average.
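The inverse-variance weighting can be sketched as follows; the rates and variances here are hypothetical, and in the actual analysis the variances come from the bootstrap distribution of each hospital's estimate:

```python
import numpy as np

def inverse_variance_weighted_mean(rates, variances):
    """Weight each hospital's rate by the inverse of the variance of its
    estimate, so that more precisely estimated (typically larger-volume)
    hospitals carry more weight in the regional or group mean."""
    rates = np.asarray(rates, dtype=float)
    weights = 1.0 / np.asarray(variances, dtype=float)
    return float((weights * rates).sum() / weights.sum())

# Two hospitals: the one with the smaller variance dominates the mean,
# pulling it toward 0.16 rather than the midpoint 0.18.
mean_rate = inverse_variance_weighted_mean([0.16, 0.20], [0.0001, 0.0009])
```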
To examine variation in hospital performance, we present the overall range, the mean and median, interquartile range, and the top and bottom deciles. Maps of regional performance classify HRRs and show quintiles of performance.
We used SAS version 9.1 (SAS Institute Inc, Cary, NC) to calculate the measures and perform our descriptive analysis. We created the HRR maps using ArcGIS version 9.3 (ESRI, Redlands, Calif). This work was approved by the Yale University Human Investigation Committee.
We identified nearly 600 000 eligible admissions for the AMI mortality and readmission measures and more than 1 million eligible admissions for the HF measures (Table 1).16 Approximately 4600 hospitals have eligible AMI cases and approximately 4800 have eligible HF cases, representing most of the short-term acute care and critical access hospitals across the nation.
Model goodness of fit is shown in Table 1. The mortality models have greater discrimination than the readmission models, indicated by c-statistics, which, when estimated using patient factors only, are 0.72 and 0.69 for AMI and HF 30-day mortality, respectively, and 0.64 and 0.61 for AMI and HF 30-day readmissions. The adjusted Cox and Snell pseudo R2 values are 0.12 and 0.07 for the AMI and HF mortality models and 0.05 and 0.04 for the AMI and HF readmission models. The IHC coefficients are 0.048 for AMI mortality, 0.053 for HF mortality, 0.015 for AMI readmissions, and 0.026 for HF readmissions.
Hospital-Specific Risk-Standardized Rates
The median (as well as average) 30-day RSMRs among hospitals in the sample are 16.6% for AMI (range, 10.9% to 24.9%; 25th to 75th percentile, 15.8% to 17.4%; 10th to 90th percentile, 14.7% to 18.4%) and 11.1% for HF (range, 6.6% to 19.8%; 25th to 75th percentile, 10.3% to 12.0%; 10th to 90th percentile, 9.4% to 13.1%). The median readmission rates are 19.9% for AMI (range, 15.3% to 29.4%; 25th to 75th percentile, 19.5% to 20.4%; 10th to 90th percentile, 18.8% to 21.1%) and 24.4% for HF (range, 15.9% to 34.4%; 25th to 75th percentile, 23.4% to 25.6%; 10th to 90th percentile, 22.3% to 27.0%) (Table 1). As seen in Table 1 and Figures 1 through 4, the distributions of the hospital RSRRs are more tightly concentrated around their centers than are those of the RSMRs, particularly for AMI. For example, the 90th-percentile rate exceeds the 10th-percentile rate by 25% for AMI mortality and 39% for HF mortality, but by only 12% for AMI readmission and 21% for HF readmission.
Figure 5A shows the distribution of 30-day RSMRs among HRRs by quintile. Higher AMI mortality is concentrated in a region centered on Oklahoma and Arkansas, extending into New Mexico, Texas, Kansas, Louisiana, Mississippi, Alabama, southern Missouri, and western Tennessee. Lower AMI mortality is found in small, densely populated HRRs primarily in the Northeast. In contrast with mortality, HRRs with low 30-day AMI readmission rates lie in the sparsely populated Northwest and therefore cover a substantial portion of the map (Figure 5B). A region stretching northwest from western New Mexico to eastern Washington is almost exclusively composed of HRRs in the bottom quintile for AMI readmissions, and much of Oregon and Washington is also in the bottom quintile. HRRs with higher AMI readmission rates are small and compact, primarily in the Northeast. Almost all of the HRRs in the quintile of highest AMI readmissions lie east of a line stretching from the eastern border of Minnesota to the eastern border of Texas, and most of the bottom quintile lies west of that line.
Higher HF mortality rates, shown in Figure 6A, are concentrated in the 5 westernmost states of the continental United States. In addition, a region centered in Arkansas and including southern Missouri and western Tennessee contains high mortality rates for HF as well as the high AMI mortality rates described above. Hospitals in the quintile of lowest HF mortality are found in small northeastern HRRs. Northeastern New York, Vermont, and New Hampshire constitute an exceptional area of high HF mortality in the Northeast region.
Like AMI readmission rates, HF readmission rates are higher in the eastern United States and lower in the western United States (Figure 6B). HF readmission rates in the bottom quintile are found in the region stretching from western New Mexico to eastern Washington that is also characterized by low AMI readmission rates. HRRs with higher HF readmission rates are concentrated in the eastern United States: besides small urban HRRs along the northeastern Atlantic coast, they appear in a diagonal band running from Arkansas, Louisiana, eastern Missouri, and southern Illinois into western Pennsylvania.
Patterns by Hospital Characteristics
The mean RSMR and RSRR for AMI and HF vary by some hospital characteristics (Table 2). However, the ranges between the 10th and 90th percentiles of the risk-standardized rates within each category show considerable overlap (Table 2), with high and low performers in every hospital category. For example, although teaching hospitals have an average RSMR for AMI of 15.0% compared with 16.3% among nonteaching hospitals, the 10th to 90th percentile range of rates for teaching hospitals is 12.5% to 17.8%, whereas the range for nonteaching hospitals is 14.3% to 18.2%.
The annual publication of the CMS measures of hospital AMI and HF mortality and readmission rates provides a perspective on national performance for hospital inpatient care for 2 major cardiovascular conditions. This report provides information about how the rates vary nationally and by hospital structural characteristics. The geographic analysis reveals that rates do vary across the nation. The hospital characteristic analysis reveals that certain features of the hospitals tend to be associated with higher performance but that there is overlap in each category, and overall structural characteristics do not appear to dictate performance.
The publicly reported measures differ in several ways from those released in 2008. First, the 2009 release includes readmission measures. Second, the analysis is now based on 3 years of data rather than 1 year. The change has increased the number of hospitalizations that are evaluated and provides more information on which to base the estimates. The limitation, however, is that the data are less contemporary. The current analyses also exclude patients discharged against medical advice.
An insight of the analysis is that for readmission, the distance between the top and bottom quartiles is not very large and the rates are uniformly high. Marked variation is often treated as a prerequisite for an area to be worthy of quality measurement. In this case, however, the high rates and the modest variation may indicate uniformly poor performance nationwide with respect to the transition from inpatient to outpatient status. At the time of the measurement, there were no financial incentives for hospitals or the community to focus on this aspect of care. In fact, hospitals that reduce readmission rates are financially penalized, because fewer readmissions mean lower hospital revenues.
What is needed now is an investment in research that provides insight about how these rates can best be improved. In some cases, such research may involve identifying top performers and investigating how they excel.17 In other cases, there may be a need for further innovation in developing new approaches to acute care or transitions in care settings. Moreover, policymakers will be challenged to determine how best to ensure that efforts to improve care are rewarded, and acceptance of current levels of quality is discouraged.
This report has several limitations to consider. It is intended as a descriptive report that organizes the publicly reported data for the cardiovascular measures in a way that shows regional differences and variation across hospital characteristics. Investigation of these relationships will be an area for future analysis. The measures also have some limitations that are worth noting. They include only patients in fee-for-service Medicare and thus do not include patients who are enrolled in managed care. The measures also exclude patients younger than 65 years, who represent a substantial minority of patients with AMI and a less substantial minority of patients with HF. The models used in these measures were validated against models based on medical record data, but we cannot exclude the possibility of important unmeasured factors that could influence the results. The data sources also do not allow for the identification of patients who are admitted for comfort care and for whom the goal of the hospitalization is not survival. Small-volume hospitals present a particular problem because they provide little information on which to estimate performance. We address this issue, in part, by combining 3 years of data and through the choice of a hierarchical model. The discrimination of the readmission models is low, but this may be because many nonclinical factors play a dominant role in readmission risk. In addition, we purposely based the risk factors on patient information at admission rather than at discharge, because we did not want to adjust for events during the hospitalization that may themselves be associated with quality of care. The result, however, is that we do not include all the factors that could improve prediction. The goal of the model was not to stratify patient readmission risk but to profile hospital quality of care, including what happened during the hospitalization and the immediate period afterward.
The magnitudes of the estimated IHC coefficients, ranging from 0.015 to 0.053, deserve comment. We report these values in the interest of fully describing the characteristics of our measures, even though there is no consensus about what would represent an acceptable level for the IHC coefficients. The values convey the proportion of total variation in outcomes that is attributable to between-hospital variation. The absence of marked between-hospital variation does not indicate that the measure is not useful, because performance could be uniformly poor nationwide and a measure may still have utility. Moreover, the size of the true IHC depends jointly on the number of patients treated at each hospital and the true degree of clustering of quality within an institution. In addition, the IHC is positively related to the prevalence of the outcome, so in this case we would not expect large values. For comparison, Gulliford et al18 estimated within-practice correlations of binary outcomes for approximately 200 general practices in the United Kingdom across acute and chronic conditions. The median practice size was 86, and the median intrapractice correlation coefficient was 0.051.
In conclusion, these findings illuminate national hospital performance for AMI and HF mortality and readmission over a recent 3-year period, coincident with public reporting of similar data at the hospital level. In particular, the readmission rates are quite high and may represent a marked opportunity for improvement. The publication of these measures and the recognition of patterns of performance should lead to efforts to better understand the key determinants of performance and to improve patient outcomes.
We thank Joel Smith, Kerianne Hourihan, and Sandi Nelson at Mathematica Policy Research Inc for data and analytic support and gratefully acknowledge Dr Lein Han from the Centers for Medicare & Medicaid Services.
Sources of Funding
The analyses on which this publication is based were performed under contract HHSM-500-2008-00020I (0001), entitled “Production and Implementation of Hospital Outcome and Efficiency Measures,” and contract HHSM-500-2008-00025I (0001), entitled “Development and Re-Evaluation of the CMS Hospital Outcomes and Efficiency Measures,” both sponsored by CMS, Department of Health and Human Services. The views expressed in this article are those of the authors and do not necessarily reflect the official position of CMS or the US Department of Health and Human Services.
Drs Krumholz, Chen, Yun Wang, Lin, Normand, and Drye, G.C. Schreiner, and Yongfei Wang work under contract with CMS to develop and maintain performance measures. Drs Merrill and Schone work under contract with CMS to produce and implement the outcome measures. Drs Straube and Rapp are employed by CMS. Dr Bradley reports no disclosures.
Guest Editor for this article was Paul A. Heidenreich, MD.
Hospital Compare. Department of Health and Human Services. http://www.hospitalcompare.hhs.gov. Accessed 24 June 2009.
Krumholz HM, Normand SL, Spertus JA, Shahian DM, Bradley EH. Measuring performance for treating heart attacks and heart failure: the case for outcomes measurement. Health Aff (Millwood). 2007; 26: 75–85.
Health Care Financing Administration. International Classification of Diseases, Ninth Revision, Clinical Modification (ICD-9-CM), 1997; Publication No. 97–1260.
Krumholz HM, Normand S-LT, Keenan PS, Lin ZQ, Drye EE, Bhat KR, Wang Y-F, Ross JS, Schuur JD, Stauffer BD, Bernheim SM, Epstein AJ, Herrin J, Federer JJ, Mattera JA, Wang Y, Mulvey G, Schreiner G. Hospital 30-day heart failure readmission measure methodology. Report prepared for the Centers for Medicare & Medicaid Services. http://www.qualitynet.org/dcs/ContentServer?c=Page&pagename=QnetPublic/Page/QnetTier4&cid=1219069855841. Accessed 24 June 2009.
Keenan PS, Normand S-LT, Lin Z, Drye EE, Bhat KR, Ross JS, Schuur JD, Stauffer BD, Bernheim SM, Epstein AJ, Wang Y-F, Herrin J, Chen J, Federer JJ, Mattera JA, Wang Y, Krumholz HM. An administrative claims measure suitable for profiling hospital performance on the basis of 30-day all-cause readmission rates among patients with heart failure. Circ Cardiovasc Qual Outcomes. 2008; 1: 29–37.
Krumholz HM, Brindis RG, Brush JE, Cohen DJ, Epstein AJ, Furie K, Howard G, Peterson ED, Rathore SS, Smith SC Jr, Spertus JA, Wang Y, Normand SL. Standards for statistical models used for public reporting of health outcomes: an American Heart Association Scientific Statement from the Quality of Care and Outcomes Research Interdisciplinary Writing Group: cosponsored by the Council on Epidemiology and Prevention and the Stroke Council: endorsed by the American College of Cardiology Foundation. Circulation. 2006; 113: 456–462.
Krumholz HM, Wang Y, Mattera JA, Wang Y, Han LF, Ingber MJ, Roman S, Normand SL. An administrative claims model suitable for profiling hospital performance based on 30-day mortality rates among patients with heart failure. Circulation. 2006; 113: 1693–1701.
Krumholz HM, Wang Y, Mattera JA, Wang Y, Han LF, Ingber MJ, Roman S, Normand SL. An administrative claims model suitable for profiling hospital performance based on 30-day mortality rates among patients with an acute myocardial infarction. Circulation. 2006; 113: 1683–1692.
Ross JS, Mulvey GK, Stauffer BD, Patlolla V, Bernheim SM, Keenan PS, Krumholz HM. Statistical models and patient predictors of readmission for heart failure: a systematic review. Arch Intern Med. 2008; 168: 1371–1386.
American Hospital Association. Health Forum, LLC. http://www.ahadata.com/ahadata/html/AHASurvey.html. Accessed 24 June 2009.
The Dartmouth Atlas. Dartmouth Atlas Project. http://www.dartmouthatlas.org/faq/data.shtm. Accessed 24 June 2009.
QualityNet. Registration: Hospitals: Inpatient. http://qualitynet.org. Accessed 24 June 2009.