Patient Satisfaction at America's Lowest Performing Hospitals
Background—Previous studies have identified hospitals with poor performance on cardiac process measures. How these hospitals fare in other domains, such as patient satisfaction, remains unknown.
Methods and Results—We used Hospital Compare data to identify hospitals reporting acute myocardial infarction (AMI) and heart failure (HF) process measures during 2006 to 2008, and calculated respective composite performance scores. Using these scores, we classified hospitals as low-performing (bottom decile for all 3 years), top-performing (top decile for all 3 years), or intermediate (all others). We used Hospital Consumer Assessment of Healthcare Providers and Systems 2008 data to compare overall satisfaction between low-, intermediate-, and top-performing hospitals. Low-performing hospitals had fewer beds and fewer nurses per patient, and were more likely to be rural, safety-net hospitals located in the South, compared with intermediate and top-performing hospitals (P<0.01 for all). After adjusting for hospital characteristics, patients were less likely to recommend low-performing hospitals to family or friends, relative to intermediate and top-performing hospitals (AMI: 58.8% versus 63.9% versus 68.8%; HF: 61.3% versus 64.0% versus 66.8%; P<0.001 for all), or to provide an overall rating of ≥9 out of 10 (AMI: 56.7% versus 60.7% versus 64.9%; HF: 57.8% versus 61.1% versus 63.6%; P<0.01 for all). Despite the association between hospital performance on process measures and patient satisfaction, we noted discordance between these measures (kappa statistic <0.20).
Conclusions—Hospitals with consistently poor performance on cardiac process measures also have lower patient satisfaction on average, suggesting that these hospitals have overall poor quality of care. However, there is discordance between the 2 measures in profiling hospital quality.
Improving the quality of health care in the United States is a national priority.1,2 As part of these efforts, hospital performance measures are increasingly being used to benchmark quality.3 Current payment reforms from the Centers for Medicare & Medicaid Services (CMS) include a 2% financial penalty for acute care hospitals that do not report quality data (pay for reporting).4 Beginning in 2012, under the Patient Protection and Affordable Care Act (P.L.111–148), CMS will seek to reimburse hospitals according to their actual performance on several key quality measures (pay for performance).4,5
Prior research has identified a group of hospitals with consistently poor performance for cardiac care, based on low adherence to important processes of care.6 These low-performing hospitals tend to be rural, safety-net facilities that serve populations with lower socio-economic status. Under the proposed payment structure, these hospitals stand to face significant financial penalties if their performance does not improve. However, critics have argued that hospital classification based on process measures alone may be problematic due to imprecision arising from lower case volume at low-performing hospitals and the poor reliability of hospitals' self-reported data.7
In recent years, patient satisfaction has been recognized as an important quality metric.8 As opposed to process measures that may be subject to “gaming” or outcome measures that may be limited by incomplete risk adjustment, patient satisfaction data are reported directly by patients and may provide a valuable instrument to determine a hospital's quality. The development of the Hospital Consumer Assessment of Healthcare Providers and Systems (HCAHPS) survey has allowed patient satisfaction measures to be formally incorporated into hospital evaluation and reimbursement.
However, it remains unknown how hospitals with consistently poor performance on process measures fare on patient satisfaction ratings. Evaluating this relationship is important because, if hospitals with low process measure performance also perform poorly on patient satisfaction ratings, then this could be construed as additional evidence of poor quality of care at these facilities, which may be in need of focused attention.
To address this gap in knowledge, we examined patient satisfaction at hospitals that have consistently poor performance on process measures for 2 cardiac diseases, acute myocardial infarction (AMI) and heart failure (HF), and compared it with patient satisfaction at hospitals with intermediate and high performance.
Hospitals that are consistently poor performers on process of care measures for cardiac diseases are structurally distinct compared with better performing hospitals (smaller facilities, fewer nurses per patient, more likely rural, safety-net hospitals).
Risk-adjusted mortality at low-performing cardiac hospitals is significantly worse compared with better performing hospitals.
Little is known about the association between hospital performance and patient satisfaction ratings.
Average patient satisfaction ratings were lower at low-performing hospitals compared with intermediate and top-performing hospitals, after adjusting for hospital characteristics.
Despite this, there is heterogeneity in patient satisfaction ratings within hospital performance groups: some low-performing hospitals have better than average satisfaction ratings, and some top-performing hospitals have below average satisfaction ratings.
We relied on 3 primary data sources: (1) the CMS Hospital Compare database, 2006 to 2008, (2) the American Hospital Association annual survey, 2006, and (3) the United States Census, 2000.
The Hospital Compare database provides information on processes and outcomes of care for select conditions, and patient satisfaction with care (HCAHPS).9 Given that there are financial penalties for hospitals that do not report these data to CMS,10 participation in Hospital Compare has become nearly universal. We downloaded hospital level process measures and HCAHPS patient satisfaction measures from the Hospital Compare website.
We were primarily interested in hospital performance for 2 cardiac diseases, AMI and HF, as these conditions are highly prevalent and have a rich evidence base supporting the development of process measures. We excluded all critical access hospitals (n=831), as these hospitals are not required to report process measure data; hospitals with fewer than 25 eligible patients for recommended therapies (n=919 for AMI; n=271 for HF);3,6 and hospitals located outside the United States and the District of Columbia (n=54 hospitals in United States territories).
For the remaining hospitals, we obtained data on 7 measures reported for AMI and 4 measures for HF during 2006 to 2008. Performance measures for AMI assessed the percentage of eligible patients who received (1) aspirin on arrival, (2) aspirin at discharge, (3) beta blockers at discharge, (4) angiotensin converting enzyme-inhibitors or angiotensin receptor blockers for left ventricular systolic dysfunction, (5) advice on smoking cessation, (6) fibrinolytic medication within 30 minutes of arrival, and (7) primary percutaneous coronary intervention within 90 minutes of arrival during each year. Performance measures for HF assessed the percentage of eligible patients who received (1) discharge instructions, (2) evaluation of left ventricular systolic function, (3) angiotensin converting enzyme-inhibitors or angiotensin receptor blockers drugs for systolic dysfunction, and (4) advice on smoking cessation. Although an additional AMI measure (beta blocker on arrival) was reported previously, it was dropped in 2008 based on evidence of harm with this therapy during the first 24 hours.11 We excluded this measure from our analysis for all 3 years.
Identification of Low-, Intermediate-, and Top-Performing Hospitals
For each hospital, we calculated a composite performance score for AMI and HF performance for each year using the opportunities scoring method.12 This was done by dividing the total number of times each treatment was administered (numerator) by the total number of opportunities for each therapy (denominator), multiplied by 100. Next, we stratified hospitals into deciles based on their composite performance scores for each year. We defined low-performing hospitals as hospitals in the bottom decile of performance for each of the 3 years, top-performing hospitals as those in the top decile of performance for each year, and intermediate hospitals as all others.
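As an illustration, the opportunities scoring and 3-year decile classification described above can be sketched in Python. This is our own reconstruction, not the authors' code (the original analysis was performed in SAS); the function names, the pandas `qcut`-based decile split, and the data layout are assumptions.

```python
import numpy as np
import pandas as pd

def composite_score(numerators, denominators):
    """Opportunities scoring: total number of times treatments were
    delivered divided by total treatment opportunities, times 100."""
    return 100.0 * np.sum(numerators) / np.sum(denominators)

def classify_hospitals(scores):
    """Classify hospitals from a DataFrame of yearly composite scores
    (rows = hospitals, one column per year, e.g. 2006-2008).

    'low'          = bottom decile in every year
    'top'          = top decile in every year
    'intermediate' = everything else
    """
    low = np.ones(len(scores), dtype=bool)
    top = np.ones(len(scores), dtype=bool)
    for year in scores.columns:
        decile = pd.qcut(scores[year].to_numpy(), 10,
                         labels=False, duplicates="drop")
        low &= decile == 0             # bottom decile that year
        top &= decile == decile.max()  # top decile that year
    return pd.Series(
        np.where(low, "low", np.where(top, "top", "intermediate")),
        index=scores.index)
```

For example, a hospital that delivered 8 of 10 aspirin-on-arrival opportunities and 9 of 10 aspirin-at-discharge opportunities would have a composite score of 100 × 17/20 = 85.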
We used the HCAHPS data for hospital-level satisfaction between April 2008 and March 2009. Details of survey development, psychometric testing, and factor analyses have been reported previously.13–15 Briefly, patients aged 18 years or older, with a nonpsychiatric discharge diagnosis and who were alive at discharge, were eligible to receive the survey. Data on the number of patients completing the survey (<100, 100–299, ≥300) and the survey response rate (the number of completed surveys divided by the total number of patients surveyed, expressed as a percentage) were also available.
In addition to providing information on the quality of interpersonal exchange between patients and staff and amenities of care (reported under 8 domains), the HCAHPS survey includes 2 measures of overall satisfaction (see online-only Data Supplement Table I).8 These include (1) whether the patient would recommend the hospital to family and friends, with responses grouped into definitely yes, probably (yes or no), and definitely no, and (2) a global rating of the hospital on a scale of 0 to 10, with 0 being the worst and 10 being the best a hospital can be, with ratings grouped into 3 categories (0–6, 7–8, and 9–10). For each hospital, HCAHPS also reports the percentage of survey responses within each category. Because overall satisfaction ratings are highly correlated with individual items in the HCAHPS survey,8 we focused only on the 2 overall ratings.
To ensure that publicly reported HCAHPS scores allow a fair and accurate comparison between hospitals, all survey responses are adjusted for differences in survey mode of administration (mail only, telephone only, mail and telephone, and active-interactive voice response) and patient mix (age, education, self-reported health status, service line [medical, surgical, or maternity], admission from emergency room, non-English primary language, and the relative lag between hospital discharge and survey completion) prior to public reporting. The adjustment coefficients are derived from a large scale validation experiment conducted by CMS prior to the national implementation of HCAHPS.13 In that study, it was determined that after adjustment for survey mode and patient mix, no additional adjustment for nonresponse was necessary.
The Hospital Compare database provides information on key hospital characteristics: ownership status (for profit/not for profit), hospital state, and zip code. We categorized each hospital's geographic location into United States census regions, and as rural or urban using zip code-level rural-urban commuting area codes derived from the United States Census 2000 data.16 We obtained additional hospital-level data by linking the Hospital Compare data to the 2006 American Hospital Association survey using each hospital's unique identification number. Variables that we used included annual admission volume, number of beds, nurse staffing levels, teaching status (membership in the Council of Teaching Hospitals), and percentage of patients receiving Medicaid.17 We calculated nurse staffing as the number of nurse full-time equivalents on staff per 1000 patient days. Finally, we categorized hospitals as safety-net if the hospital's Medicaid caseload for 2006 exceeded the mean for all hospitals in the state by more than 1 standard deviation.6 Nineteen AMI hospitals (1 low-performing and 18 intermediate-performing) and 30 HF hospitals (2 low-performing, 26 intermediate-performing, and 2 top-performing) could not be linked to the American Hospital Association dataset and were excluded.
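The safety-net rule (Medicaid caseload more than 1 standard deviation above the state mean) is simple to express in code. The following pandas sketch is illustrative only; the input layout (hospital-indexed Series of Medicaid shares and state codes) is our assumption.

```python
import pandas as pd

def flag_safety_net(medicaid_share, state):
    """Flag a hospital as safety-net when its Medicaid caseload exceeds
    the mean for all hospitals in its state by more than 1 standard
    deviation.

    medicaid_share: Series of Medicaid caseload percentages, indexed by
    hospital; state: Series of state codes with the same index.
    """
    state_mean = medicaid_share.groupby(state).transform("mean")
    state_sd = medicaid_share.groupby(state).transform("std")
    return medicaid_share > state_mean + state_sd
```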
We compared characteristics of low-, intermediate-, and top-performing hospitals using χ2 test and Mantel-Haenszel test of trend for categorical variables and linear regression for continuous variables. We also compared the number of patients completing the survey and the survey response rate across hospital groups using similar tests. Next, we compared hospital level patient satisfaction using the 2 overall satisfaction measures between low-performing, intermediate, and top-performing hospitals using multivariable linear regression while adjusting for differences in hospital characteristics (annual admission volume [per 1000], number of beds [<100, 100–400, >400], nurse full time equivalent [FTE] per 1000 patient days, teaching status, ownership status [for profit, nonprofit], location [urban, rural], safety-net status, and census region [Midwest, Northeast, South, and West]). We also explored the degree of discordance between hospital categorization based on process measure performance, and the overall patient satisfaction ratings by examining the proportion of low-, intermediate-, and top-performing hospitals within each disease category (AMI and HF) that were in the top quartile, top half, and bottom quartile of satisfaction ratings. Finally, we categorized hospitals separately into quartiles based on 2008 composite scores and overall patient satisfaction ratings. We then compared agreement in hospital classification based on these 2 measures using kappa statistics.
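The agreement analysis can be illustrated with a linearly weighted kappa over quartile labels. This is a from-scratch sketch assuming quartiles coded 0 to 3; the paper's computation was done in SAS, so this implementation is only a stand-in for the same statistic.

```python
import numpy as np

def weighted_kappa(ratings_a, ratings_b, n_cat=4):
    """Linearly weighted kappa between two ordinal classifications of
    the same hospitals (e.g., process-measure quartile vs. patient
    satisfaction quartile). Returns 1 for perfect agreement and 0 for
    chance-level agreement."""
    a = np.asarray(ratings_a)
    b = np.asarray(ratings_b)
    # observed joint distribution of the two quartile assignments
    observed = np.zeros((n_cat, n_cat))
    for i, j in zip(a, b):
        observed[i, j] += 1
    observed /= observed.sum()
    # expected joint distribution if the classifications were independent
    expected = np.outer(observed.sum(axis=1), observed.sum(axis=0))
    # linear disagreement weights: 0 on the diagonal, 1 at maximum distance
    idx = np.arange(n_cat)
    weights = np.abs(idx[:, None] - idx[None, :]) / (n_cat - 1)
    return 1.0 - (weights * observed).sum() / (weights * expected).sum()
```

A kappa near 0, as reported for HF (0.07), indicates that the two quality rankings agree little better than chance.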
In our multivariable models, we used a combination of statistical and graphical methods to examine the model assumptions of normality and homogeneity of variance of the error term. Because the assumption of homogeneity of variance was not satisfied, we estimated separate variances for each of the hospital groups (low-, intermediate-, and top-performing). This was done using Proc Mixed in SAS with the "Repeated" statement and "Group" option. This model allowed us to estimate directly the variance of our dependent variable for each hospital group, which was used to perform hypothesis tests.
To determine if our results were sensitive to our categorization of hospital performance, we repeated these analyses using alternative thresholds for defining low-performing and top-performing hospitals as the bottom and top 20%, and bottom and top 25% of all hospitals, respectively, based on their composite performance scores.
All analyses were performed using SAS version 9.2 (SAS Institute Inc.). All probability values are 2-sided. The study was approved by the Institutional Review Board at the University of Iowa.
Among all hospitals that reported data on performance measures during 2006 to 2008, 2467 hospitals for AMI (72% of all hospitals) and 3115 for HF (91% of all hospitals) met the eligibility criteria for inclusion in the study (Table 1). Of these, 88 AMI hospitals and 147 HF hospitals were consistently low-performing, while 49 AMI hospitals and 105 HF hospitals were consistently top-performing. Only 19 hospitals were low-performing for both AMI and HF, and 18 hospitals were top-performing for both diseases. No hospital that was top-performing for one condition was low-performing for the other.
For AMI, mean composite performance score ranged from 78% at low-performing hospitals to over 99% at top-performing hospitals (see online-only Data Supplement Table II). For HF, mean score ranged from 48% at low-performing hospitals to 99% at top-performing hospitals (see online-only Data Supplement Table III). Importantly, for both AMI and HF, low-performing hospitals had lower annual admission volume, fewer beds, lower nurse FTE per 1000 patient days, and were more likely to be rural, safety-net hospitals. More than half of the low-performing AMI and HF hospitals were located in the South census region (see online-only Data Supplement Tables II and III).
Five AMI hospitals (all intermediate-performing) and 12 HF hospitals (1 low-performing, 10 intermediate-performing, and 1 top-performing) did not report HCAHPS data to CMS during the study period. Among reporting hospitals, overall survey response rate was significantly lower at low-performing hospitals for both AMI and HF when compared with better performing hospitals for both diseases (Table 2). Nearly all hospitals had at least 100 respondents, although low-performing AMI and HF hospitals had fewer respondents than intermediate and top-performing hospitals in both groups.
Overall, 66.4% of respondents at AMI hospitals reported that they would definitely recommend the hospital in which they received their care to their family and friends, and 62.2% of patients rated the care they received as high quality (9 or 10). We found that low-performing AMI and HF hospitals scored significantly lower in both these domains of patient satisfaction, on average, compared with intermediate and top-performing hospitals (Figure 1A and B, and Table 3; P<0.001 for both comparisons). These results remained significant and largely unchanged after adjustment for several hospital characteristics, with differences in satisfaction ratings as high as 10 percentage points (Table 3). Importantly, a lower ratio of nurse FTEs to patients, larger bed size, and for profit ownership were independently associated with lower patient satisfaction (see online-only Data Supplement Table IV; P<0.001 for each comparison). The relationship between patient satisfaction ratings and hospital performance persisted in sensitivity analyses that used alternative definitions of low performance (bottom quartile and bottom quintile of performance for 3 consecutive years, respectively; see online-only Data Supplement Table V).
Table 4 shows the degree of agreement between performance on AMI and HF composite measures and overall patient satisfaction rating. We found that among low-performing AMI hospitals, 51.4% were in the bottom quartile and 79.5% were in the bottom half of patient satisfaction, whereas among top-performing AMI hospitals, 51.0% were in the top quartile, and 69.4% were in the top half of patient satisfaction rating (whether a patient would recommend the hospital to their family or friends). Formal analyses of discordance using only 2008 process measures and patient satisfaction data revealed a weak agreement in these 2 measures of profiling hospital quality (weighted kappa statistic 0.19; Table 4). The discordance was much more pronounced when we examined patient satisfaction ratings at hospitals profiled on HF performance measures. We found that 39.6% of the low-performing HF hospitals were in the top half of patient satisfaction ratings, whereas 40% of the top-performing HF hospitals were in the bottom half of patient satisfaction (using the same rating). Agreement between patient satisfaction ratings and HF performance measures was even weaker (weighted kappa statistic 0.07; Table 4). These findings were similar when we conducted the above analyses with the global rating of a hospital on a scale of 0 to 10 (Table 4).
We found that hospitals that consistently perform poorly on cardiac process measures also perform poorly on patient satisfaction, suggesting that poor quality clinical care is perceived by patients. The difference in overall satisfaction between hospital categories was significant even after adjusting for important hospital characteristics that are previously known to influence patient satisfaction. Although patient satisfaction ratings were lower on average at low-performing hospitals compared with better performing hospitals, there was evidence of discordance in performance on process measures and patient satisfaction ratings, especially for HF. A number of our findings are important and merit further discussion.
While several studies have reported on the association between clinical process measures and patient satisfaction,8,18–20 our study focused specifically on hospitals with consistently poor performance on cardiac process measures. Our study reiterates the findings from our previous work showing that consistently low-performing cardiac hospitals differ from better performing hospitals with regard to hospital structure and organization; these hospitals are smaller, rural facilities that are predominantly concentrated in the South, and have higher risk-adjusted mortality.6 The current analyses add to these findings by demonstrating that poor process measure adherence is also associated with lower patient satisfaction among surviving patients at these hospitals. Together, the 2 studies suggest that there is a discrete group of hospitals with consistently poor process measure adherence, consistently high risk-adjusted mortality, and consistently poor patient satisfaction, which further strengthens the case for quality improvement at these hospitals.
Based on these results, one might argue that quality improvement initiatives focused on this discrete group of hospitals could theoretically magnify improvements in health care and positively impact care for vulnerable patients. However, pay for performance programs as currently envisioned in the Patient Protection and Affordable Care Act4 may fall short of this objective. Low-performing hospitals may be disadvantaged if they lack the resources necessary to engage in quality improvement efforts. By rewarding top performance or net improvement and penalizing low-performing hospitals, pay for performance could worsen disparities and adversely impact care of the poor, underserved, minority patients that seek care at these hospitals.21 While a recent study using data from the Premier initiative has challenged these concerns,22 the fact remains that hospitals that participated in Premier were financially secure, with greater ability and commitment toward quality improvement compared with the average hospital.23 Thus, policy makers would need to go beyond current pay for performance incentives to spur improvement in quality at these low-performing hospitals. To accomplish that, a firm understanding of the factors associated with poor performance at these hospitals (eg, organizational values, leadership, and communication),24 their community benefit, and the alternative choices available to patients who seek care at these hospitals is necessary to better inform policy.
Under the proposed value based purchasing plan, patient satisfaction is likely to be an integral part of hospital reimbursement. While an important quality metric from a patient's perspective, inclusion of patient satisfaction as a performance measure may pose some problems. First, unlike process measures (eg, prescribing aspirin to patients with AMI), patient satisfaction is not a discrete intervention; it is a complex multidimensional construct, the correlates of which are not fully understood.25 Inclusion of satisfaction measures for incentive payment presupposes that hospitals with lower satisfaction scores “know” how to improve satisfaction at their hospitals. Although, based on this study, it is tempting to think that greater adherence to process measures might result in improved patient satisfaction with care, unmeasured patient, hospital, or physician characteristics certainly could explain the association we observed. Additionally, hospital characteristics that are associated with patient satisfaction are not easily modifiable (urban location, nonprofit status, number of beds), and scant data exist to show whether hospital investment in the modifiable factors (greater number of nurses) will result in improved quality of care. Thus, without a careful understanding of the determinants of patient satisfaction, such a proposal might result in misguided investments by hospitals in programs that may not be effective at improving patient satisfaction or the overall quality of care.
The relatively robust association that we observed between process measures and satisfaction is not consistent across all studies.8,18–20 Some of these differences are likely due to differences in study design. Because we used 3 years of consecutive process measure data instead of a single year, low-performing hospitals in our study are, by definition, an extreme group of low quality hospitals. Therefore, it is not surprising that the differences in patient satisfaction scores observed in our study are larger than those previously reported.
Although we found that patient satisfaction ratings were on average lower at low-performing hospitals, there was heterogeneity in satisfaction ratings within hospital groups, suggesting the presence of low-performing hospitals with high patient satisfaction ratings and vice versa. This was especially true for HF where we found that nearly 40% of low-performing HF hospitals had better than average patient satisfaction ratings, and 40% of top-performing HF hospitals had below average patient satisfaction ratings. The observed association between hospital performance and patient satisfaction notwithstanding, these findings illustrate that process measures and satisfaction ratings measure relatively distinct facets of hospital quality and support the notion that evaluation of hospital quality should be based on multiple measures. Future studies aimed at developing a better understanding of the factors that might explain the variability in patient satisfaction ratings at low-performing hospitals are warranted.
Our study should be interpreted in light of the following limitations. First, our analyses were based on data self-reported by hospitals, which may be subject to "gaming"; CMS is planning to expand its current auditing practices to improve reliability. Second, our choice of classifying hospitals into low-, intermediate-, and top-performing groups is somewhat arbitrary. To ensure the robustness of our results, we conducted sensitivity analyses using alternate cut points, which yielded similar results. Third, we only had access to aggregate hospital-level HCAHPS survey data; patient-level data were not available, preventing us from assessing satisfaction specifically in patients with cardiovascular disease. Despite that, we found lower hospital-wide patient satisfaction at low-performing cardiac hospitals, suggesting broader issues in the organization and delivery of care at these facilities. Fourth, our findings do not establish causality between poor hospital performance on process measures and lower satisfaction ratings at these hospitals. Finally, while overall survey response rates were low, they were significantly lower at low-performing hospitals. The HCAHPS survey is adjusted for differences in patient mix, including nonresponse, prior to public reporting. Any residual bias would only strengthen our findings because nonresponse is negatively associated with satisfaction.13
Our study found that hospitals performing poorly on cardiac process measures also have, on average, lower satisfaction ratings compared with better performing hospitals, suggesting a pressing need to improve quality of care at this easily identifiable group of hospitals. However, there is discordance between the 2 measures in profiling hospital quality.
Sources of Funding
Dr Saket Girotra is a Fellow in the Division of Cardiovascular Diseases, Department of Medicine at University of Iowa. Dr Peter Cram is supported by a K24 award from the National Institute of Arthritis, Musculoskeletal and Skin Diseases (AR062133) and by the Department of Veterans Affairs. Dr Ioana Popescu is supported by a K08 award from the National Heart, Lung and Blood Institute (NHLBI, HL095930-01). This work is also funded in part by R01 HL085347 from NHLBI and R01 AG033035 from the National Institute of Aging at the National Institute of Health (Dr Peter Cram).
The online-only Data Supplement is available at http://circoutcomes.ahajournals.org/lookup/suppl/doi:10.1161/CIRCOUTCOMES.111.964361/-/DC1.
- Received December 4, 2011.
- Accepted March 19, 2012.
- © 2012 American Heart Association, Inc.
Institute of Medicine. Crossing the Quality Chasm: A New Health System for the 21st Century. National Academy Press; 2001.
Patient Protection and Affordable Care Act, P.L. 111-148. http://frwebgate.access.gpo.gov/cgi-bin/getdoc.cgi?dbname=111_cong_bills&docid=f:h3590enr.txt.pdf. Accessed on December 4, 2011.
Popescu I, Werner RM, Vaughan-Sarrazin MS, Cram P.
Hospital Compare Database. http://www.hospitalcompare.hhs.gov. Accessed on December 4, 2011.
Reporting Hospital Quality Data for Annual Payment Update (RHQDAPU). http://www.qualitynet.org/dcs/ContentServer?cid=1138115987129&pagename=QnetPublic%2FPage%2FQnetTier2&c=Page. Accessed on December 4, 2011.
Chen ZM, Pan HC, Chen YP, Peto R, Collins R, Jiang LX, Xie JX, Liu LS.
Peterson ED, DeLong ER, Masoudi FA, O'Brien SM, Peterson PN, Rumsfeld JS, Shahian DM, Shaw RE.
Rural Urban Commuting Area Codes. http://www.ers.usda.gov/Data/RuralUrbanCommutingAreaCodes/. Accessed on December 4, 2011.
American Hospital Association Annual Survey Database. http://www.ahadata.com/ahadata/html/AHASurvey.html. Accessed on December 4, 2011.
Glickman SW, Boulding W, Manary M, Staelin R, Roe MT, Wolosin RJ, Ohman EM, Peterson ED, Schulman KA.
Lee DS, Tu JV, Chong A, Alter DA.