Patient Satisfaction and Its Relationship With Clinical Quality and Inpatient Mortality in Acute Myocardial Infarction
Background— Hospitals use patient satisfaction surveys to assess their quality of care. A key question is whether these data provide valid information about the medically related quality of hospital care. The objective of this study was to determine whether patient satisfaction is associated with adherence to practice guidelines and outcomes for acute myocardial infarction and to identify the key drivers of patient satisfaction.
Methods and Results— We examined clinical data on 6467 patients with acute myocardial infarction treated at 25 US hospitals participating in the CRUSADE initiative from 2001 to 2006. Press Ganey patient satisfaction surveys for cardiac admissions were also available from 3562 patients treated at these same 25 centers over this period. Patient satisfaction was positively correlated with 13 of 14 acute myocardial infarction performance measures. After controlling for a hospital’s overall guideline adherence score, higher patient satisfaction scores were associated with lower risk-adjusted inpatient mortality (P=0.025). One-quartile changes in both patient satisfaction and guideline adherence scores produced similar changes in predicted survival. For example, a 1-quartile change (75th to 100th) in either the patient satisfaction score or the guideline adherence score yielded the same change in predicted survival (odds ratio, 1.24; 95% CI, 1.02 to 1.49; and odds ratio, 1.24; 95% CI, 1.08 to 1.41, respectively). Satisfaction with nursing care was the most important determinant of overall patient satisfaction (P<0.001).
Conclusions— Higher patient satisfaction is associated with improved guideline adherence and lower inpatient mortality rates, suggesting that patients are good discriminators of the type of care they receive. Thus, patients’ satisfaction with their care provides important incremental information on the quality of acute myocardial infarction care.
Received August 7, 2009; accepted December 23, 2009.
A large number of hospitals now routinely use patient satisfaction survey instruments and data to assess their quality of care.1–4 In addition, the Centers for Medicare and Medicaid Services (CMS) recently developed a national, standardized survey instrument and data collection methodology for measuring patients’ perceptions of their hospital experiences; this instrument is called the Hospital Consumer Assessment of Healthcare Providers and Systems (HCAHPS) survey.5–7 The first set of HCAHPS data were made publicly available in March 2008 to enable consumers to make comparisons of patient experiences across hospitals.
Despite the popularity of these survey instruments, important questions remain about the use of satisfaction data to assess healthcare quality. Do these data provide valid information about the medically related quality of hospital care, and if so, do these data provide independent information on the overall quality of patient care beyond that obtained from the more accepted clinical performance measures? Are hospitals that have higher levels of patient satisfaction more likely also to produce better health outcomes? Which hospital experiences best account for patients’ overall satisfaction?
This article explores the relationship between a hospital’s overall patient satisfaction score, its overall clinical quality score, and its risk-adjusted inpatient mortality rate for patients with acute myocardial infarction (AMI) using data from a clinical quality improvement initiative coupled with patient satisfaction survey data collected by an independent third party. Specifically, we examine (1) whether patient satisfaction is associated with the quality of cardiac care as measured by adherence to practice guideline recommendations, (2) whether patient satisfaction is an independent predictor of a hospital’s inpatient mortality rate for AMI, and (3) which aspects of patients’ interactions with a hospital’s facilities and staff are the most important determinants of their overall satisfaction.
WHAT IS KNOWN
The Institute of Medicine has identified patient-centered care, or care that is “respectful of and responsive to individual patient preferences, needs, and values and ensures that patient values guide all clinical decisions,” as a key quality domain.
Hospitals routinely use patient satisfaction surveys to assess the quality of care, although it remains unclear whether patient satisfaction data provide valid information about the medically related quality of hospital care.
WHAT THE STUDY ADDS
Higher patient satisfaction is associated with lower inpatient mortality rates for acute myocardial infarction, even after controlling for hospital adherence to evidence-based practice guidelines, suggesting that patients are good discriminators of the type of care they receive.
Patients seem to differentiate between the technical (eg, quality of nurses and physicians) and nontechnical (eg, room décor, quality of food) aspects of medical care.
Patients’ satisfaction with their care provides important incremental information on the quality of acute myocardial infarction care beyond clinical performance measures.
Quarterly clinical process-of-care and patient characteristic information were obtained from the Can Rapid Risk Stratification of Unstable Angina Patients Suppress Adverse Outcomes with Early Implementation of the ACC/AHA Guidelines (CRUSADE) quality improvement registry.8–12 CRUSADE centers collected and submitted clinical information regarding in-hospital care and outcomes of patients with non–ST-segment elevation acute coronary syndromes with high-risk clinical features, including positive cardiac biomarkers or ischemic ST-segment ECG changes.
Quarterly patient satisfaction data were obtained from patient surveys administered by Press Ganey Associates (South Bend, Ind). Patients eligible to receive a survey included those discharged alive from the hospital, with the exception of patients transferred to another hospital using Press Ganey surveys and patients who had already been surveyed within the prior 30 days. Patients were surveyed within 1 week of hospital discharge. Only surveys for patients with cardiac diagnosis-related groups (DRG) were used for this study (including DRGs 121, 122, 124, 125, 140, and 143).
Of the 568 hospitals that participated in CRUSADE between January 2001 and December 2006, we identified and contacted 110 hospitals that also collected Press Ganey survey data sometime during the same period. Forty-five of these hospitals granted permission to use their patient satisfaction data for this study. Using the hospital quarter as our unit of analysis, we first eliminated any quarterly patient satisfaction data from a given hospital for which we did not have at least 3 patient responses. Next, we matched the remaining quarterly observations across the 2 data sources and eliminated hospital quarters for which we did not have both clinical and satisfaction data. This yielded a total of 207 matched hospital quarter observations from 29 hospitals. Finally, because we wanted to control for individual hospital effects in our analysis, we eliminated 4 hospitals for which we did not have at least 2 quarters of matched CRUSADE and patient satisfaction data. These procedures reduced our relevant dataset to 203 quarterly observations at 25 hospitals.
We calculated quarterly hospital-level adherence scores from the CRUSADE database for 14 different Class I evidence-based guidelines from the American College of Cardiology (ACC) and American Heart Association (AHA) guidelines for the treatment of AMI. We calculated hospital-level adherence scores for each measure using the same scoring method as used by CMS in the Hospital Quality Incentive Demonstration pay-for-performance program.13 That is, we calculated scores for AMI by summing the number of times each therapy was administered and dividing this amount by the sum of total eligible opportunities for all patients at the hospital. We then divided the 14 clinical processes into 3 categories (acute, discharge, and secondary prevention) and calculated separate composite scores for each category using the CMS scoring method. We also calculated an overall hospital-level composite using all 14 measures. Patient eligibility for relevant measures was determined according to defined ACC/AHA guideline indications and reported contraindications. Patients who died anytime during their hospital stay or who were transferred to another hospital were excluded from discharge care assessment. In-hospital mortality was defined as death from any cause during a patient’s hospital stay within the relevant quarter. Inpatient mortality was adjusted for a patient risk score that was calculated by a logistic model which included demographic and clinical characteristics previously identified to predict risk in a cohort of patients with acute coronary syndrome without persistent ST-segment elevation.14
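The opportunity-model arithmetic described above can be sketched as follows. This is a minimal illustration only; the data layout and measure names are hypothetical and do not reflect the actual CRUSADE schema.

```python
# Sketch of the CMS "opportunity model" composite described above:
# total therapies given, divided by total eligible opportunities,
# pooled across all patients and measures for a hospital quarter.
def composite_adherence(patients):
    """patients: list of dicts mapping measure name -> (eligible, received)."""
    given = 0
    opportunities = 0
    for p in patients:
        for eligible, received in p.values():
            if eligible:
                opportunities += 1
                given += int(received)
    return given / opportunities if opportunities else None

# Two hypothetical patients, three illustrative measures:
cohort = [
    {"aspirin": (True, True), "beta_blocker": (True, True), "statin": (True, False)},
    {"aspirin": (True, True), "beta_blocker": (False, False), "statin": (True, True)},
]
print(composite_adherence(cohort))  # 4 given / 5 opportunities = 0.8
```

Note that pooling opportunities (rather than averaging per-patient rates) gives patients with more eligible measures proportionally more weight in the composite, which is the defining feature of the CMS scoring method.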
The underlying patient satisfaction data comprised patient satisfaction scores on 9 different dimensions of the hospital experience (nurses, personal issues, admission, physicians, visitors and family, discharge, meals, room, and tests and treatments) and 1 overall patient assessment of this experience. Each of these 10 satisfaction scores was based on multiple questions for that aspect of the experience (supplemental Appendix 1). The overall patient assessment score was the average of 3 questions: “How well staff worked together to care for you”; “Likelihood of your recommending this hospital to others”; and “Overall rating of care given in a hospital.” All patient satisfaction questions were scored on a 5-point scale anchored by the words “very poor” and “very good” and then converted to a 100-point scale where zero represented “very poor” and 100 represented “very good.” Quarterly averages for each hospital were obtained by averaging over all of the obtained surveys on that particular score.
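As a sketch of the score conversion described above, assuming the transformation is the natural linear mapping from the 5-point scale to 0 to 100 (the exact Press Ganey transformation is an assumption here):

```python
# Map a 1-5 Likert response ("very poor" ... "very good") linearly onto
# a 0-100 scale, then average over all surveys in a hospital quarter.
def to_100_point(raw):
    # 1 -> 0, 2 -> 25, 3 -> 50, 4 -> 75, 5 -> 100
    return (raw - 1) * 25.0

def quarterly_average(responses):
    scores = [to_100_point(r) for r in responses]
    return sum(scores) / len(scores)

print(quarterly_average([5, 4, 4, 3]))  # -> 75.0
```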
The hospital quarter was the unit of study for all analyses. Pairwise Pearson product-moment correlation coefficients were computed between quarterly hospital patient overall satisfaction scores and the 14 individual quarterly hospital clinical process scores and risk-adjusted inpatient mortality for AMI.
We used multivariable logistic regression to investigate whether patient overall satisfaction was associated with risk-adjusted mortality after controlling for clinical quality. In each of these analyses, the dependent variable was based on risk-adjusted inpatient survival (1−mortality) for the particular hospital quarter. Consequently, hospital quarters with more outcome opportunities were weighted more heavily. The independent variables were based on the overall patient satisfaction score and composite guideline score for each hospital quarter. We also used weighted least squares (WLS) linear regression, in which the dependent variable was the proportion of surviving AMI patients, and obtained almost identical results. However, because the logistic regression results provide an easy way to compare the relative magnitude of improvement in survival due to changes in both patient satisfaction scores and performance scores, we report only the logistic regression findings. We also fit a mixed-effects model with random hospital effects to account for the correlation of quarterly observations within hospitals. The results of the mixed model were similar—both in direction and magnitude of effect—to the main analyses, so we report only the logistic regression results.
Next, we conducted the Durbin-Wu-Hausman test15 to determine whether the patient overall satisfaction measure was correlated with fixed but unobserved hospital effects such as hospital size and facilities, administrative expertise, and academic affiliation. We performed this test to determine whether it was necessary to control for such fixed effects in our analysis or whether we could use the more efficient estimator obtained from an analysis excluding fixed-effects variables (ie, 25 hospital dummy variables). The Durbin-Wu-Hausman analysis was conducted by running a multivariable logistic regression with mortality as the dependent variable and the following 3 independent variables: the quarterly overall clinical composite score, the quarterly patient overall satisfaction score, and the residual errors from an analysis of quarterly patient overall satisfaction. These residuals come from an equation with overall satisfaction as the dependent variable and the 25 hospital dummy variables and quarterly overall clinical performance as independent variables.
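The two-stage construction behind this test can be sketched on simulated data. This is a linear stand-in for the paper's logit second stage; all values below are synthetic, not study data, and the coefficients are arbitrary.

```python
import numpy as np

rng = np.random.default_rng(0)

# Simulated hospital-quarter panel: 5 hospitals x 8 quarters (synthetic).
n_hosp, n_q = 5, 8
hosp = np.repeat(np.arange(n_hosp), n_q)               # hospital id per row
clinical = rng.uniform(70, 100, n_hosp * n_q)          # composite guideline score
satisf = 0.3 * clinical + rng.normal(0, 5, n_hosp * n_q)  # satisfaction score

# Stage 1: regress satisfaction on hospital dummies + clinical score.
# The residuals capture satisfaction variation not explained by clinical
# performance or fixed hospital effects.
D = (hosp[:, None] == np.arange(n_hosp)).astype(float)  # hospital dummies
X1 = np.column_stack([D, clinical])
beta1, *_ = np.linalg.lstsq(X1, satisf, rcond=None)
resid = satisf - X1 @ beta1

# Stage 2: include those residuals alongside clinical score and satisfaction
# in the outcome regression (linear here, logistic in the paper).
survival = 0.9 + 0.001 * clinical + rng.normal(0, 0.01, n_hosp * n_q)
X2 = np.column_stack([np.ones_like(clinical), clinical, satisf, resid])
beta2, *_ = np.linalg.lstsq(X2, survival, rcond=None)

# A statistically nonsignificant coefficient on the residual term suggests
# satisfaction is not correlated with omitted fixed hospital effects.
print(beta2[-1])
```

In practice one would test the residual coefficient's significance (as the paper does, reporting P=0.29) rather than simply inspecting its magnitude.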
Next, we used a WLS model to determine the association of average answers to each of the individual survey sections (ie, nurses, physicians, meals, etc) with overall patient satisfaction. The unit of analysis was the hospital quarter, and the weights reflected the number of patient surveys in the given quarter.
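A minimal sketch of such a survey-weighted fit, using synthetic hospital-quarter data. The dimension scores, weights, and coefficients below are illustrative assumptions, not study values.

```python
import numpy as np

# Weighted least squares: each hospital-quarter row is weighted by its
# number of completed surveys, via the sqrt-weight rescaling identity.
def wls(X, y, w):
    sw = np.sqrt(w)
    beta, *_ = np.linalg.lstsq(X * sw[:, None], y * sw, rcond=None)
    return beta

rng = np.random.default_rng(1)
n = 30                                        # hospital quarters
nurses = rng.uniform(70, 95, n)               # dimension scores (synthetic)
physicians = rng.uniform(70, 95, n)
overall = 0.6 * nurses + 0.3 * physicians + rng.normal(0, 2, n)
surveys = rng.integers(3, 50, n).astype(float)  # weight = surveys per quarter

X = np.column_stack([np.ones(n), nurses, physicians])
beta = wls(X, overall, surveys)
print(beta)  # intercept plus dimension coefficients
```

Weighting by survey count simply gives quarters with more respondents (and hence more precisely estimated averages) more influence on the fitted coefficients.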
Finally, we performed analyses to ascertain whether our study population was representative of the larger Press Ganey and CRUSADE populations that were excluded from the study because we could not match data between the hospitals. We repeated the analysis for the relationship of overall satisfaction and the 9 different dimensions of patient satisfaction for the 262 hospital quarters of patient data that were excluded because we did not have equivalent hospital quarter clinical data. Additionally, we ran logistic regression where the dependent variable was risk-adjusted inpatient mortality and the independent variable was overall clinical performance for the excluded sample of 6082 hospital quarters for those CRUSADE hospitals for which we did not have matched patient satisfaction data. We compared the coefficients from these additional models with our study data using the Chow F test or the Wald test, depending on whether we used WLS or logistic regression.16
All analyses were performed using JMP version 7.0.2 (SAS Institute, Inc, Cary, NC). P<0.05 was considered statistically significant.
The hospital quarterly observations from 25 hospitals are based on a total of 3562 completed patient satisfaction surveys (average number of surveys/observation=18) and clinical data on 6467 patients in the CRUSADE registry (average number of patients/observation=32). Table 1 shows the diversity of our hospital sample on 4 different dimensions, including academic affiliation, size, geography, and structural resources. We have also included the total population of CRUSADE hospitals and CRUSADE patients for comparison. Overall, our study population has similar characteristics. The median number of quarters per hospital in our final dataset was 8 (interquartile range, 2 to 20), and the median number of patients surveyed per hospital quarter was 18 (interquartile range, 4 to 51).
Table 2 shows the variation of quarterly hospital-level guideline adherence scores and risk-adjusted inpatient mortality for AMI. Table 3 displays the median and interquartile quarterly hospital-level patient satisfaction scores for cardiac admissions for each of the 9 dimensions, as well as the overall satisfaction measure. As can be seen from these tables, there is substantial diversity in our sample of hospitals and scores. Moreover, there is more variation among the clinical scores than patient satisfaction scores.
Table 4 reports the correlations between the quarterly hospital-level patient overall satisfaction scores for cardiac admissions and adherence to the 14 quality measures. Overall satisfaction was positively correlated with 13 of these 14 measures, although only 4 measures were significant at the P=0.05 level. However, at a more aggregate level, we found that patient satisfaction was significantly and positively correlated with the acute, discharge, and overall composite clinical measures. In addition, higher satisfaction scores were associated with lower risk-adjusted inpatient mortality rates (R=−0.216, P=0.002).
The regression associated with the Durbin-Wu-Hausman analysis was significant at the P=0.01 level. More importantly, the coefficient on the residual variable was not significant (P=0.29). This indicates that the patient overall satisfaction score is not correlated with any omitted fixed hospital effects, so excluding fixed hospital effects from our analyses does not bias the results.
Table 5 presents the logistic regression estimates for both the univariate and multivariate analyses when the dependent variable is 1−risk-adjusted mortality (ie, survival). As can be seen from these results, both the overall clinical performance score and the patient overall satisfaction score for cardiac admissions are significantly and positively associated with survival for AMI even after controlling for the other factor, with probability values of 0.001 and 0.025, respectively.
To better interpret the managerial significance of these results, we performed sensitivity analyses to determine the change in predicted survival associated with a 1-quartile change in the patient satisfaction score while keeping the clinical composite score fixed, and the converse. Each 1-quartile change was made in reference to the previous quartile (ie, 0 to 25, 25 to 50, 50 to 75, and 75 to 100). One-quartile changes in patient satisfaction scores were associated with higher risk-adjusted survival over all 4 quartiles of change (odds ratios, 1.87, 1.09, 1.09, and 1.24, respectively; all P<0.05) (Figure). One-quartile changes in patient satisfaction scores produced very similar increases in predicted survival compared with 1-quartile changes in composite guideline adherence scores. For example, a 1-quartile change (75th to 100th) in either the patient satisfaction score or the guideline adherence score yielded the same change in predicted survival (odds ratio, 1.24). As might be expected, larger changes in survival were observed when moving from the lowest-scoring hospital to the 25th percentile and from the 75th percentile to the highest-scoring hospital. Also, changes in clinical performance had more impact in hospitals below the median, whereas little to no difference between the 2 scores was observed in terms of changes in survival for hospitals above the median.
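The arithmetic for translating an odds ratio into a change in predicted survival can be illustrated as follows. The 97% baseline survival used here is hypothetical, chosen only for illustration.

```python
# Apply an odds ratio to a baseline probability: convert probability to
# odds, scale by the odds ratio, and convert back.
def apply_odds_ratio(p, oratio):
    odds = p / (1 - p)
    new_odds = odds * oratio
    return new_odds / (1 + new_odds)

base = 0.97                                   # hypothetical baseline survival
print(round(apply_odds_ratio(base, 1.24), 4))  # -> 0.9757
```

Because an odds ratio acts multiplicatively on odds rather than additively on probabilities, the same odds ratio (eg, 1.24) implies a larger absolute survival change at lower baseline survival.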
Table 6 presents the WLS results in which the independent measures are the average quarterly scores from the patients’ evaluations of the 9 different dimensions of their hospital experience and the dependent variable is the quarterly patient overall satisfaction score. Significant predictors of patient satisfaction, in descending order, were nursing care, physicians, personal issues, the admission process, and visitors and family.
There was no significant difference in the coefficients obtained for the relationship of overall satisfaction and the 9 different dimensions of patient satisfaction between our study population and the 262 hospital quarters of patient data that were excluded because we did not have equivalent hospital quarter clinical data (Chow test: [F(10,443)]=0.548; P=0.85), nor was there any difference in the coefficients obtained for the regression between mortality and hospital-level clinical performance between our study population and the excluded sample of 6082 hospital quarters for those CRUSADE hospitals for which we did not have matched patient satisfaction data (Wald χ2=0.96; P=0.99). These findings suggest that our results generalize to at least the population of excluded hospital quarters.
The Institute of Medicine has identified patient-centered care, or care that is “respectful of and responsive to individual patient preferences, needs, and values and ensures that patient values guide all clinical decisions,” as a key quality domain.17 Consistent with this notion, when we controlled for a hospital’s clinical performance, higher hospital-level patient satisfaction scores were independently associated with lower hospital inpatient mortality rates. This suggests that patients’ assessment of their care provides important and valid information to consumers and hospital managers about the overall quality of hospital care beyond clinical process measures. We believe this finding is new to the literature and has important implications not only for how to measure quality but also how to manage it.
To our knowledge, this is the first study to evaluate the association between patient satisfaction and mortality after adjusting for clinical quality. Jha et al,18 using data from 2429 hospitals reporting CMS-obtained patient satisfaction data for the year 2007, found a strong positive correlation between patient overall satisfaction and clinical performance. Our study confirms and extends these findings, and we found that patient satisfaction was an independent predictor of risk-adjusted inpatient mortality. Jaipaul and Rosenthal19 previously reported a negative correlation between patient overall satisfaction and unadjusted mortality rates in a study of 29 hospitals in Northeast Ohio. That study, however, was limited to a cohort of hospitals in a small geographic area and did not adjust for clinical quality or patient risk factors when evaluating the relationship between patient satisfaction and outcomes.
To gain deeper insight into which experiences patients drew on when responding to the overall satisfaction questions, we examined the individual survey items. Hospitals that scored high on questions such as “skill of nurses (physician),” “how well the nurses (physician) kept you informed,” “amount of attention paid to your special or personal needs,” “how well your pain was controlled,” “the degree to which the hospital staff addressed your emotional needs,” “physician’s concern for your questions and worries,” “time physician spent with you,” and “staff efforts to include you in decisions about your treatment” also tended to score high on patient overall satisfaction. In contrast, scores on questions concerning the room (eg, “room temperature and pleasantness of room décor”), meals (eg, “quality of food, temperature of food”), tests (eg, “waiting time for tests or treatment”), and discharge (eg, “speed of discharge process”) were not associated with the patient overall satisfaction score. Moreover, patient satisfaction with nursing care was the most important determinant of patient overall satisfaction, highlighting an important area for further quality improvement efforts and underscoring the role of the entire health care team in the in-hospital treatment of patients with AMI.
We believe these results have implications for measuring and managing the quality of medical care. First, these results support the premise that patients are a credible source of valid information when assessing and managing the quality of medical care and that this information represents a different view of quality than a hospital’s adherence to clinical performance measures. Second, this source of information should be very useful in helping managers identify ways to improve the overall quality of hospital care. Our results imply that the association between changes in patient satisfaction and mortality was almost as large as that between changes in process performance and mortality.
Our findings also imply that increasing patient overall satisfaction will require attention to specific aspects of the patient’s experience. Thus, patients seem to differentiate between the technical and nontechnical aspects of medical care. Consistent with this observation, early invasive management (catheterization) was the clinical practice guideline most strongly associated with patient satisfaction and has previously been associated with a lower risk of inpatient mortality.20 Consequently, increasing the patient overall satisfaction score is less about making patients “happy” (eg, improving the food, room décor, etc) and more about increasing the quality of care and of the interactions between patients and staff, particularly nurses and physicians.
Our results also highlight that the quality of care includes actions other than those measured by clinical performance measures. This is particularly true for actions associated with nurses, an area that is not well captured by current clinical performance measures.21 In this study, the largest independent predictor of patient overall satisfaction was patient satisfaction with nursing care. A growing body of evidence supports a robust relationship between the quality of nursing care and patient safety and outcomes,22,23 and continued efforts are needed to measure and improve the quality of nursing care.24 We surmise that it may be efficient to capture specific aspects of patient satisfaction with nursing care (eg, quality of discharge planning) by asking patients for feedback. A similar process could be used to assess the quality of discharge planning in an effort to reduce readmission rates and outpatient mortality.25 These applications highlight the potential value of patient satisfaction data, not only to provide consumers with more information about patient experiences, but also to help managers evaluate hospital actions aimed at improving the quality of care.
The present study has several potential limitations. First, our sample was limited to hospitals that participated in CRUSADE and collected patient satisfaction data. This sample included a diverse group of hospitals with respect to size, academic affiliation, and geography but was biased toward hospitals with full invasive and revascularization capabilities; thus, our results may not be generalizable to hospitals without revascularization capabilities. In addition, although one could argue that these hospitals are more motivated toward quality improvement than the average hospital by virtue of their participation in CRUSADE, we see no plausible reason why the interrelationship between quality, satisfaction, and outcomes would be fundamentally different in these hospitals than in a national cohort.
Second, although our study population is smaller than some previously published reports of patient satisfaction,18 a smaller sample should actually bias against finding a significant association between satisfaction and outcomes. Moreover, as discussed above, whenever we were able to compare our results with larger samples of Press Ganey and CRUSADE hospitals, we found a strong correspondence. Similarly, our univariate results are similar to those reported elsewhere.19 We take these findings to suggest that our sample is representative of a more general population of hospitals and that although our sample sizes are not large, our findings are not caused by random error.
Third, our study is limited to AMI, so the results are not necessarily generalizable to other medical or surgical conditions. Fourth, there is potentially an issue with censored sample bias because we obviously could not obtain patient satisfaction data for patients who died. This phenomenon, however, actually created a bias against finding an association between hospital satisfaction and hospital outcomes.
Finally, it is important to note that by testing for endogeneity, we were able to address the possibility that patient satisfaction scores are related to some fixed hospital effect, such as managerial competence or hospital facilities, and that it is this (unobserved) fixed effect, rather than patient satisfaction, that affects mortality. In addition, when we performed models that included hospital structural characteristics (eg, size, academic affiliation, geography, cardiology services), we obtained nearly identical results. These results give us reasonable assurance that we are probably observing the true association between patient satisfaction and mortality rather than an association arising from other unmeasured factors.
Higher patient satisfaction is associated with lower inpatient mortality rates even after controlling for performance guideline adherence, suggesting that patients are good discriminators of the type of care they receive. Thus, patients’ satisfaction with their care provides important incremental information on the quality of their care and care providers.
Sources of Funding
CRUSADE is funded by the Schering-Plough Corporation. Bristol-Myers Squibb/Sanofi-Aventis Pharmaceuticals Partnership provides additional funding support. Millennium Pharmaceuticals, Inc, also funded this work. There was no direct funding for this analysis. Dr Glickman is supported by a Physician Faculty Scholar award from the Robert Wood Johnson Foundation.
Drs Roe, Ohman, Peterson, and Schulman have made available detailed listings of disclosure information at: http://www.dcri.duke.edu/research/coi.jsp. No other authors reported financial disclosures. All analyses were performed independently at Duke University. Press Ganey had no direct role in the data analysis or drafting of the manuscript.
The online-only Data Supplement is available at http://circoutcomes.ahajournals.org/cgi/content/full/CIRCOUTCOMES.109.900597/DC1.
Press I. Patient Satisfaction: Understanding and Managing the Experience of Care. 2nd ed. Ann Arbor, Mich: Health Administration Press; 2006.
HCAHPS (Hospital Consumer Assessment of Healthcare Providers and Systems) facts. Centers for Medicare and Medicaid Services Web site. Available at: http://www.cms.hhs.gov/apps/media/press/factsheet.asp?Counter=3007&intNumPerPage=10&checkDate=&checkKey=&srchType=1&numDays=3500&srchOpt=0&srchData=&keywordType=All&chkNewsType=6&intPage=&showAll=&pYear=&year=&desc=false&cboOrder=date. Updated March 28, 2008. Accessed March 23, 2009.
Staman KL, Roe MT, Fraulo ES, Lytle BL, Gibler WB, Ohman EM, Peterson ED. Quality improvement tools designed to improve adherence to ACC/AHA guidelines for the care of patients with non–ST-segment acute coronary syndromes: the CRUSADE quality improvement initiative. Crit Pathw Cardiol. 2003; 2: 34–40.
Shah BR, Glickman SW, Liang L, Gibler WB, Ohman EM, Pollack CV Jr, Roe MT, Peterson ED. The impact of for-profit hospital status on the care and outcomes of patients with non–ST-segment elevation myocardial infarction: results from the CRUSADE Initiative. J Am Coll Cardiol. 2007; 50: 1462–1468.
Hoekstra JW, Pollack CV Jr, Roe MT, Peterson ED, Brindis R, Harrington RA, Christenson RH, Smith SC, Ohman EM, Gibler WB. Improving the care of patients with non–ST-elevation acute coronary syndromes in the emergency department: the CRUSADE initiative. Acad Emerg Med. 2002; 9: 1146–1155.
Premier Inc. Centers for Medicare and Medicaid Services (CMS)/Premier Hospital Quality Improvement Demonstration (HQID) project: findings from year two. Available at: http://www.premierinc.com/quality-safety/tools-services/p4p/hqi/resources/hqi-whitepaper-year2.pdf. Accessed November 18, 2007.
Boersma E, Pieper KS, Steyerberg EW, Wilcox RG, Chang WC, Lee KL, Akkerhuis KM, Harrington RA, Deckers JW, Armstrong PW, Lincoff AM, Califf RM, Topol EJ, Simoons ML. Predictors of outcome in patients with acute coronary syndromes without persistent ST-segment elevation: results from an international trial of 9461 patients. Circulation. 2000; 101: 2557–2567.
Davidson R, MacKinnon JG. Estimation and Inference in Econometrics. New York: Oxford University Press; 1993.
Institute of Medicine. Crossing the Quality Chasm: A New Health System for the 21st Century. Washington, DC: National Academy Press; 2001.
Jaipaul CK, Rosenthal GE. Do hospitals with lower mortality have higher patient satisfaction? A regional analysis of patients with medical diagnoses. Am J Med Qual. 2003; 18: 59–65.
Bhatt DL, Roe MT, Peterson ED, Li Y, Chen AY, Harrington RA, Greenbaum AB, Berger PB, Cannon CP, Cohen DJ, Gibson CM, Saucedo JF, Kleiman NS, Hochman JS, Boden WE, Brindis RG, Peacock WF, Smith SC Jr, Pollack CV Jr, Gibler WB, Ohman EM. CRUSADE Investigators. Utilization of early invasive management strategies for high-risk patients with non–ST-segment elevation acute coronary syndromes: results from the CRUSADE Quality Improvement Initiative. JAMA. 2004; 292: 2096–2104.
Kurtzman ET, Dawson EM, Johnson JE. The current state of nursing performance measurement, public reporting, and value-based purchasing. Policy Polit Nurs Pract. 2008; 9: 181–191.
Needleman J, Kurtzman ET, Kizer KW. Performance measurement of nursing care: state of the science and the current consensus. Med Care Res Rev. 2007; 64 (2 Suppl): 10S–43S.
Kurtzman ET, Corrigan JM. Measuring the contribution of nursing to quality, patient safety, and health care outcomes. Policy Polit Nurs Pract. 2007; 8: 20–36.