Original Investigation

Effectiveness of Nonpublic Report Cards for Reducing Trauma Mortality

Laurent G. Glance, MD1; Turner M. Osler, MD2; Dana B. Mukamel, PhD3; J. Wayne Meredith, MD4; Andrew W. Dick, PhD5
Author Affiliations
1Department of Anesthesiology, University of Rochester School of Medicine, Rochester, New York
2Department of Surgery, University of Vermont Medical College, Colchester
3Center for Health Policy Research, Department of Medicine, University of California, Irvine
4Department of Surgery, Wake Forest University School of Medicine, Winston-Salem, North Carolina
5RAND, RAND Health, Boston, Massachusetts
JAMA Surg. 2014;149(2):137-143. doi:10.1001/jamasurg.2013.3977. Published online December 11, 2013.

Importance  An Institute of Medicine report on patient safety that cited medical errors as the 8th leading cause of death fueled demand to use quality measurement as a catalyst for improving health care quality.

Objective  To determine whether providing hospitals with benchmarking information on their risk-adjusted trauma mortality outcomes will decrease mortality in trauma patients.

Design, Setting, and Participants  Hospitals were provided confidential reports of their trauma risk–adjusted mortality rates using data from the National Trauma Data Bank. Regression discontinuity modeling was used to examine the impact of nonpublic reporting on in-hospital mortality in a cohort of 326 206 trauma patients admitted to 44 hospitals, controlling for injury severity, patient case mix, hospital effects, and preexisting time trends.

Main Outcomes and Measures  In-hospital mortality rates.

Results  Performance benchmarking was not significantly associated with lower in-hospital mortality (adjusted odds ratio [AOR], 0.89; 95% CI, 0.68-1.16; P = .39). Similar results were obtained in secondary analyses after stratifying patients by mechanism of trauma: blunt trauma (AOR, 0.91; 95% CI, 0.69-1.20; P = .51) and penetrating trauma (AOR, 0.75; 95% CI, 0.44-1.28; P = .29). We also did not find a significant association between nonpublic reporting and in-hospital mortality in either low-risk (AOR, 0.84; 95% CI, 0.57-1.25; P = .40) or high-risk (AOR, 0.88; 95% CI, 0.67-1.17; P = .38) patients.

Conclusions and Relevance  Nonpublic reporting of hospital risk-adjusted mortality rates does not lead to improved trauma mortality outcomes. The findings of this study may prove useful to the American College of Surgeons as it moves ahead to further develop and expand its national trauma benchmarking program.

The release of the influential Institute of Medicine report on patient safety,1 citing medical errors as the 8th leading cause of death, has fueled the demand to use quality measurement as a catalyst for improving health care quality. Efforts by the Veterans Administration (VA),2 New York state,3 and hospitals in Northern New England4 showed that nonpublic reporting was associated with significant reductions in mortality and morbidity in patients undergoing cardiac and noncardiac surgery. More recently, the American College of Surgeons5 and the Society of Thoracic Surgeons6 have spearheaded national efforts to improve patient outcomes using performance benchmarking. The Centers for Medicare and Medicaid Services publicly reports mortality rates for patients hospitalized with acute myocardial infarctions, heart failure, and pneumonia,7 and it is expanding these efforts to include many other areas of health care.8 The need to control runaway health care spending has further magnified the need for quality measurement to ensure that health care quality is not sacrificed to save health care dollars.

Twenty-five years ago, the American College of Surgeons (ACS) created the National Trauma Data Bank (NTDB) as a “foundation for evidence-based practice, performance improvement, and research.”9 At its inception, this registry was not used to provide participating hospitals with information on their risk-adjusted outcomes. There is now mounting evidence that patient outcomes following traumatic injury are determined not only by the dose of trauma, but also by the hospital where the patient is treated.10-12 There is remarkable variability in trauma mortality outcomes across hospitals, with up to 4-fold differences in risk-adjusted mortality rates between the best- and worst-performing hospitals.10,13 This quality gap presents an opportunity to improve trauma outcomes using performance feedback to bridge the divide between lower-performance and higher-performance hospitals.

With funding from the Agency for Healthcare Research and Quality, and in collaboration with the ACS, we conducted a prospective trial to test whether nonpublic reporting leads to lower trauma mortality. Participating hospitals were provided with detailed reports of their risk-adjusted mortality outcomes. We designed this study to examine the feasibility and impact of using the NTDB data infrastructure to improve trauma outcomes through nonpublic report cards. In our analysis examining the impact of nonpublic reporting, we controlled for temporal trends and hospital effects, in addition to controlling for patient case mix and injury severity. The objective of this article is to report the findings of this trial.

Data Source and Study Population

The University of Rochester School of Medicine institutional review board approved this study; the need for written informed patient consent was waived. This study was designed to determine whether nonpublic reporting leads to a reduction in in-hospital mortality in injured patients using data from the NTDB. The NTDB was created by the ACS to serve as a national repository for trauma center data.14 The NTDB includes the following data elements: patient demographics, hospital demographics, International Classification of Diseases, Ninth Revision, Clinical Modification (ICD-9-CM) diagnostic and injury codes, encrypted hospital identifiers, physiology values, and in-hospital mortality.

National Trauma Data Bank coding practices dictate how missing, invalid, and inconsistent data are handled once the data have been transmitted to the NTDB. Data reports submitted by individual hospitals are checked by the NTDB using software edit tools.15 Internal consistency is assessed by comparing the values of related data elements; for example, the intensive care unit length of stay must be less than the total hospital stay. Each hospital submitting data to the NTDB is given a screening report and has the opportunity to resubmit its data after correcting any errors.16
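
To illustrate the kind of edit checks described above, the short pandas sketch below flags internally inconsistent or out-of-range records. The field names and thresholds are illustrative assumptions, not actual NTDB fields or the NTDB edit software.

    import pandas as pd

    # Hypothetical submission with NTDB-style fields; column names are illustrative.
    submission = pd.DataFrame({
        "icu_days": [3, 10, 2],
        "hospital_days": [7, 6, 5],   # record 2 violates "ICU stay < hospital stay"
        "age": [34, 130, 57],         # record 2 also reports an implausible age
    })

    # Internal-consistency and range checks of the kind applied by the edit tools.
    flags = pd.DataFrame({
        "icu_exceeds_hospital_stay": submission["icu_days"] >= submission["hospital_days"],
        "implausible_age": ~submission["age"].between(0, 110),
    })

    # Records with any failed check would be returned in the screening report.
    screening_report = submission[flags.any(axis=1)]
    print(screening_report)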

At the inception of this trial, we identified 2 separate hospital cohorts depending on whether hospitals primarily used ICD-9-CM codes or Abbreviated Injury Scale (AIS) codes to code patient injuries.10 We developed and validated 2 versions of the Trauma Mortality Prediction Model (TMPM)—one based on AIS injury codes17 and the other based on ICD-9-CM codes18—whose statistical performance was superior to that of the existing standard injury models, the Injury Severity Score (ISS) and ICISS (an International Classification of Diseases, Ninth Revision–based injury severity score). In 2008, the ACS National Trauma Data Standard was revised to mandate the use of ICD-9-CM codes to characterize injury severity and to make AIS injury codes optional.19 In 2008, we sent hospitals report cards based on either TMPM–ICD-9 or TMPM-AIS (using 2006 data), depending on whether they coded injury data using ICD-9-CM or AIS codes. In 2009 and 2010, participating hospitals received annual report cards based on TMPM–ICD-9 because of the mandated change in coding practices. Because TMPM–ICD-9 and TMPM-AIS are based on 2 completely different sets of injury codes—ICD-9-CM and AIS codes—we limited our analysis to hospitals that coded injuries using ICD-9-CM codes and received report cards based on TMPM–ICD-9 in 2008-2010, to avoid confounding the intervention effect (nonpublic reporting starting in 2008) with the changeover in injury coding (starting in 2007). Each hospital was provided with benchmarking information on its risk-adjusted in-hospital mortality for its entire patient cohort, as well as separate reports stratified by mechanism of trauma (blunt, gunshot wound, motor vehicle crash, and pedestrian) and for high-risk patients.10 A sample report card is shown in the eAppendix (Supplement).

After excluding patients with burns, unspecified injuries, nontraumatic injuries, or missing mechanisms of injury, as well as patients who were dead on admission or transferred out to another hospital, our study sample included 330 700 patients in 44 hospitals (Figure 1). We excluded patients with missing demographic data, invalid ICD-9-CM codes, missing empirical injury severities (MARC values), and missing ICD-9-CM codes. The final analytic sample consisted of 326 206 patients admitted to 44 hospitals (Figure 1).
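
A minimal pandas sketch of this two-stage cohort construction is shown below; the boolean flags and column names are hypothetical stand-ins rather than actual NTDB variable names.

    import pandas as pd

    def build_cohort(ntdb: pd.DataFrame) -> pd.DataFrame:
        # Stage 1: clinical exclusions (330 700 patients remained in the study sample).
        step1 = ntdb[
            ~ntdb["burn"]
            & ~ntdb["unspecified_injury"]
            & ~ntdb["nontraumatic_injury"]
            & ntdb["mechanism"].notna()
            & ~ntdb["dead_on_arrival"]
            & ~ntdb["transferred_out"]
        ]
        # Stage 2: records unusable for risk adjustment (326 206 patients remained).
        return step1[
            step1["age"].notna()
            & step1["sex"].notna()
            & step1["icd9_codes_valid"]
            & step1["marc_severity"].notna()
        ]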

Figure 1. Diagram Illustrating Selection of Patients Included in the Data Analysis

ICD-9 indicates International Classification of Diseases, Ninth Revision; MARC, empirical injury severities.
Statistical Analysis

The aim of this analysis was to determine whether the introduction of nonpublic reporting was associated with lower in-hospital mortality after controlling for patient and hospital factors, as well as for any preexisting trend toward lower mortality rates during the study. This analysis was performed using regression discontinuity modeling,20,21 an econometric technique that identifies the effect of an intervention (here, the introduction of nonpublic reporting) as the intercept shift between the pre- and post-intervention periods, after controlling for patient and hospital factors and for preexisting temporal trends.
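
Written in notation of our own choosing (the article itself presents no equations), the model described above can be expressed as

    \operatorname{logit}\,\Pr(Y_{ij} = 1) = \beta_0 + \beta_1\,\mathrm{Post}_{ij} + f(t_{ij}) + \beta_2\,(\mathrm{Post}_{ij} \times t_{ij}) + \mathbf{x}_{ij}^{\top}\boldsymbol{\gamma} + \alpha_j

where Y_ij indicates in-hospital death for patient i treated at hospital j, Post_ij indicates admission after nonpublic reporting began, f(t_ij) captures the preexisting trend in admission year, x_ij contains the patient-level risk factors, and alpha_j is a hospital fixed effect. The intercept shift at the discontinuity, exp(beta_1), is the adjusted odds ratio attributed to nonpublic reporting.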

To perform the regression discontinuity analysis, we estimated a patient-level logistic regression model to examine the association between in-hospital mortality and the initiation of nonpublic reporting. A dummy variable indicated whether a patient was admitted before or after nonpublic reporting was initiated. We controlled for secular trends by including year of admission as a categorical variable, omitting data from 2008 (when report cards were initiated). We also included an interaction term between report card and year to examine whether the slope of the time trend changed after the initiation of reporting. We controlled for patient risk factors using an enhanced version of TMPM–ICD-9: patient age, sex, injury severity of the 5 most severe injuries, transfer status, mechanism of injury (ie, blunt, gunshot wound, motor vehicle crash, stab injury, pedestrian, and low fall), motor component of the Glasgow Coma Scale, and systolic blood pressure.18 We also controlled for hospital fixed effects by including a separate indicator variable for each hospital; this allowed each hospital to act as its own control, so that the estimated effect of nonpublic reporting reflects mortality changes within hospitals. Missing values of the motor component of the Glasgow Coma Scale and systolic blood pressure were imputed using the Stata implementation of the multiple imputation by chained equations method described by van Buuren et al.22 Fractional polynomial analysis was used to determine the optimal specification of age.23
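
The model was fit in Stata; as a rough, runnable stand-in, the Python/statsmodels sketch below fits the same kind of specification to simulated data. For simplicity it uses a centered linear year trend rather than the categorical year terms, omits the multiple imputation and fractional polynomial steps, and all variable names (died, post, year_c, severity, hospital) are hypothetical.

    import numpy as np
    import pandas as pd
    import statsmodels.formula.api as smf

    rng = np.random.default_rng(0)
    n = 20000
    df = pd.DataFrame({
        "year": rng.integers(2006, 2011, n),      # admission years 2006-2010
        "hospital": rng.integers(0, 44, n),       # 44 hospitals
        "age": rng.uniform(18, 90, n),
        "male": rng.integers(0, 2, n),
        "severity": rng.normal(0.0, 1.0, n),      # stand-in for a TMPM-ICD-9 risk score
    })
    df["post"] = (df["year"] >= 2008).astype(int) # nonpublic reporting began in 2008
    df["year_c"] = df["year"] - 2008              # time trend centered at the cutoff
    # Simulated outcome: mortality depends on severity and age, not on reporting.
    lin = -4.0 + 1.2 * df["severity"] + 0.02 * (df["age"] - 40)
    df["died"] = rng.binomial(1, 1.0 / (1.0 + np.exp(-lin)))

    # Patient-level logit with the reporting indicator, time trend, interaction,
    # patient risk factors, and hospital fixed effects; SEs clustered by hospital.
    fit = smf.logit(
        "died ~ post + year_c + post:year_c + age + male + severity + C(hospital)",
        data=df,
    ).fit(disp=False, cov_type="cluster", cov_kwds={"groups": df["hospital"]})

    print("AOR for nonpublic reporting:", np.exp(fit.params["post"]))
    print("95% CI:", np.exp(fit.conf_int().loc["post"]).values)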

Several sensitivity analyses were performed to examine the robustness of the estimated impact of nonpublic reporting. We performed stratified analyses in which we limited the cohort to patients with either blunt or penetrating trauma, and to either low-risk (predicted probability of death <5%) or high-risk (predicted probability of death ≥5%) patients. We also performed stratified analyses in which we included only low-performance, average-performance, or high-performance hospitals. Hospital performance was estimated in the 2006 data using hierarchical logistic regression, based on TMPM–ICD-9, in which hospitals were specified as a random effect. The empirical Bayes estimate of the hospital effect was exponentiated to yield an adjusted odds ratio (AOR).24 Hospitals whose AORs were significantly lower than 1 were classified as high-performance outliers, whereas hospitals with AORs significantly greater than 1 were classified as low-performance outliers.
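
The hierarchical profiling model described above was fit in Stata; the sketch below is not that model but a simplified moment-based empirical Bayes approximation in Python, intended only to illustrate the shrink-then-classify logic. It reuses the hypothetical simulated data frame df and column names from the previous sketch.

    import numpy as np
    import statsmodels.formula.api as smf

    # Patient-level risk model without hospital terms, fit on the baseline year.
    base = df[df["year"] == 2006].copy()
    risk_fit = smf.logit("died ~ age + male + severity", data=base).fit(disp=False)
    base["expected"] = risk_fit.predict(base)

    # Observed and expected deaths per hospital, and the log observed/expected ratio.
    g = base.groupby("hospital").agg(obs=("died", "sum"), exp=("expected", "sum"))
    g["log_oe"] = np.log((g["obs"] + 0.5) / (g["exp"] + 0.5))
    g["log_oe"] -= g["log_oe"].mean()       # center so that 1.0 means "average hospital"
    g["var"] = 1.0 / (g["exp"] + 0.5)       # rough sampling variance of log O/E

    # Moment-based estimate of the between-hospital variance, then shrink each
    # hospital's log O/E toward 0 (a simplified stand-in for the empirical Bayes
    # hospital effect from the hierarchical logistic model).
    tau2 = max(g["log_oe"].var(ddof=1) - g["var"].mean(), 0.0)
    w = tau2 / (tau2 + g["var"])
    g["eb"] = w * g["log_oe"]
    g["aor"] = np.exp(g["eb"])
    half = 1.96 * np.sqrt(w * g["var"])     # approximate posterior interval half-width
    g["status"] = np.where(np.exp(g["eb"] + half) < 1, "high-performance",
                   np.where(np.exp(g["eb"] - half) > 1, "low-performance", "average"))
    print(g[["aor", "status"]])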

Data management and statistical analyses were performed using Stata SE/MP version 11.0 (StataCorp). Robust variance estimators were used because patient observations were clustered by hospital.25 All statistical tests were 2-tailed and P values less than .05 were considered significant.

Patient characteristics before (2006-2007) and after (2008-2010) initiation of nonpublic reporting are shown in eTable 1 in the Supplement. Overall, the median age was 40 years, most patients were male (66%), 24% were transferred in from other hospitals, and most patients sustained injuries from either blunt trauma (42%) or motor vehicle crashes (23%). The observed mortality rate was 4.19%. No clinically significant changes in patient case mix were detected over the 5-year study. Hospital characteristics are shown in eTable 2 in the Supplement. Most hospitals were either level I (43%) or level II (43%) trauma centers, nearly half were university hospitals, and nearly all were nonprofit. Most hospitals had between 200 and 400 beds (39%) or greater than 400 beds (54%). All geographic regions in the United States were well represented: Northeast (16%), South (39%), Midwest (16%), and West (27%).

After controlling for patient characteristics, hospital factors, and preexisting time trends, nonpublic reporting did not have a significant impact on in-hospital mortality (AOR, 0.89; 95% CI, 0.68-1.16; P = .39) (Table 1 and Figure 2A). The findings were similar when we stratified patients by mechanism of trauma: blunt trauma (AOR, 0.91; 95% CI, 0.69-1.20; P = .51) and penetrating trauma (AOR, 0.75; 95% CI, 0.44-1.28; P = .29) (Table 1 and Figure 2B). We also did not find a significant association between nonpublic reporting and in-hospital mortality in either low-risk (AOR, 0.84; 95% CI, 0.57-1.25; P = .40) or high-risk (AOR, 0.88; 95% CI, 0.67-1.17; P = .38) patients (Table 2 and Figure 2C and D).

Table 1. In-Hospital Mortality Before and After Implementation of Nonpublic Reporting
Figure 2. Trends in Adjusted Mortality Rates

The graphs show the trends in the adjusted mortality rates for all patients (A), patients with blunt or penetrating trauma (B), low-risk patients (predicted probability of death <5%) (C), high-risk patients (predicted probability of death ≥5%) (D), and hospitals stratified by performance strata, adjusting for patient risk factors (E). Nonpublic reporting was initiated in 2008.
Table 2. In-Hospital Mortality Before and After Implementation of Nonpublic Reporting

We conducted a final sensitivity analysis in which we stratified our sample according to whether patients were treated at low-, average-, or high-performance hospitals. Consistent with all the other analyses, we found that nonpublic reporting was not associated with improved outcomes in patients treated in low-performance (AOR, 1.17; 95% CI, 0.65-1.23; P = .61), average-performance (AOR, 0.89; 95% CI, 0.65-1.23; P = .49), or high-performance (AOR, 0.74; 95% CI, 0.34-1.57; P = .43) hospitals (Table 3 and Figure 2E).

Table 3. In-Hospital Mortality Before and After Implementation of Nonpublic Reporting

We did not find significant evidence that providing hospitals with nonpublic reports of their risk-adjusted trauma mortality rates was associated with improvement in trauma mortality, even after controlling for patient case mix and preexisting time trends. The findings were similar when we limited our analysis to patients with blunt or penetrating trauma, and to patients at low or high risk of death. We also did not find evidence that nonpublic reporting had a differential effect depending on whether a hospital was a low-, average-, or high-performance hospital.

There are historical precedents to suggest that nonpublic reporting could be expected to lead to improved outcomes. The VA National Surgical Quality Improvement Program (NSQIP) was established by congressional mandate in response to concerns about the quality of surgical care in VA hospitals.26 The VA prospectively collected clinical data on patient risk and outcomes and provided VA hospitals with confidential risk-adjusted comparative outcomes information. Between 1994 and 2004, unadjusted surgical mortality rates decreased by 37% and complication rates by 42%.26 Under the leadership of the ACS, the NSQIP was expanded to include hospitals outside of the VA and currently includes more than 400 non-VA hospital sites.27 Hall and colleagues5 reported that adjusted mortality rates for a constant patient population hypothetically undergoing surgery in ACS NSQIP hospitals in 2005, 2006, and 2007 improved by 26% over a 2-year period and that complications decreased by 13% over the same period.

More recent evidence suggests that hospital benchmarking may be less effective than was originally believed. Medicare’s Hospital Compare, the largest and most ambitious public reporting initiative to date, has publicly reported risk-adjusted mortality for all Medicare patients with acute myocardial infarction, heart failure, and pneumonia since 2005. After controlling for preexisting time trends in mortality, there is no evidence that this public reporting initiative resulted in a mortality reduction for acute myocardial infarction or pneumonia, and it was associated with only a modest 3% relative risk reduction for heart failure.7

The question arises as to why we found no association between the SMARTT (Survival Measurement and Reporting Trial for Trauma) reporting initiative and mortality. It is possible that performance benchmarking is not as effective as once believed. Although the early results from the VA NSQIP were very impressive, the effect of the ACS NSQIP on mortality outcomes was less dramatic, and those analyses did not account for the possibility that the benefit observed with nonpublic reporting was owing to preexisting temporal trends unrelated to performance feedback. Furthermore, the negative findings of the more methodologically rigorous and larger Hospital Compare study in nonsurgical patients stand in sharp contrast to the earlier NSQIP studies.

It is also possible that benchmarking alone is not sufficient and that reporting initiatives need to be tied to financial incentives. However, evaluations of the Premier and Centers for Medicare and Medicaid Services Hospital Quality Incentive Demonstration program did not find that the program was associated with decreases in the mortality rates for patients hospitalized with acute myocardial infarction, heart failure, or pneumonia.28,29 Finally, it is also possible that providing hospitals with benchmarking information, with or without financial incentives, is not enough to improve outcomes without also providing hospitals with information on how to improve patient outcomes. In the VA, hospitals have the option of inviting the NSQIP to conduct consultative structured site visits, and best practices from high-performance hospitals are disseminated to all hospitals in the VA and ACS NSQIP.5,30 However, the evidence that improved adherence to best practices is associated with better outcomes is relatively modest in both surgical and nonsurgical patients.31,32

Our study has several potential limitations. First, we did not know to what extent participating hospitals actually used the benchmarking information to guide their quality improvement efforts. Second, our hospital study cohort was limited to a subset of hospitals in the NTDB with a low incidence of missing data and to hospitals that used ICD-9-CM codes throughout the study period; therefore, it is not necessarily representative of all hospitals caring for trauma patients. Although we found no evidence of important and sustained effects, a larger hospital sample would have provided greater statistical precision. Third, we used ICD-9-CM diagnostic codes as the basis for injury coding, as opposed to clinical AIS injury codes, because of the change in the national standard for trauma coding that occurred during the study. However, the ICD-9–based trauma mortality prediction model used to produce the quality reports in our analyses has been previously validated and shown to have excellent statistical properties.18 Fourth, because the NTDB does not have reliable comorbidity data, we did not include comorbidities in our benchmarking reports, nor did we include them in our analyses. However, we have previously shown, using all-payer administrative data, that omitting comorbidity data from risk adjustment does not have a substantial impact on hospital quality measurement in trauma.13 Finally, our analysis was limited to in-hospital mortality and did not examine complications, functional outcomes, readmissions, or long-term outcomes because of limitations of the NTDB.

Our findings have potentially important implications for the ACS as it continues to expand the ACS Trauma Quality Improvement Program (TQIP). This national trauma benchmarking program has grown from a 2008 pilot study of 23 centers to include more than 160 trauma centers.33,34 The TQIP is modeled after the ACS NSQIP and assumes that TQIP benchmarking reports will serve as a catalyst for performance improvement and lead to better patient outcomes. Our findings point to the limitations of performance feedback as a mechanism for improving trauma outcomes. However, current efforts to shift financial risk to hospital and physician groups under the Affordable Care Act35 may provide very strong incentives for hospitals to decrease costly complications and improve patient outcomes. As we move toward a health care model in which hospitals and physicians are increasingly held accountable for patient and financial outcomes, detailed benchmarking reports from the ACS TQIP may provide critical information that trauma centers can use to identify and more effectively target quality problems.

In summary, our study suggests that nonpublic reporting of hospital risk-adjusted mortality rates does not lead to improved trauma mortality outcomes. It is possible that adding other interventions to performance feedback, such as structured site visits, efforts to identify and disseminate best practices, and meaningful financial incentives, will lead to improved trauma outcomes. The findings of this study may prove useful to the ACS leadership as it moves ahead to further develop and expand its national trauma benchmarking program. It may be possible for the ACS TQIP, using a much larger hospital sample, to reexamine the value of nonpublic reporting, possibly in combination with other quality-improvement strategies.

Corresponding Author: Laurent G. Glance, MD, Department of Anesthesiology, University of Rochester Medical Center, 601 Elmwood Ave, PO Box 604, Rochester, NY 14642 (laurent_glance@urmc.rochester.edu).

Accepted for Publication: April 19, 2013.

Published Online: December 11, 2013. doi:10.1001/jamasurg.2013.3977.

Author Contributions: Dr Glance had full access to all of the data in the study and takes responsibility for the integrity of the data and the accuracy of the data analysis.

Study concept and design: All authors.

Acquisition of data: Glance.

Analysis and interpretation of data: Glance, Osler, Mukamel, Dick.

Drafting of the manuscript: Glance.

Critical revision of the manuscript for important intellectual content: All authors.

Statistical analysis: Glance, Osler, Mukamel, Dick.

Obtained funding: Glance, Mukamel.

Administrative, technical, or material support: Glance, Meredith.

Study supervision: Glance.

Conflict of Interest Disclosures: None reported.

Funding/Support: This project was supported by grant R01 HS 16737 from the Agency for Healthcare Research and Quality.

Role of the Sponsor: The Agency for Healthcare Research and Quality had no role in the design and conduct of the study; collection, management, analysis, or interpretation of the data; preparation, review, or approval of the manuscript; and decision to submit the manuscript for publication.

Disclaimer: The views presented in this manuscript are those of the authors and may not reflect those of the Agency for Healthcare Research and Quality. These data were obtained from the American College of Surgeons National Trauma Data Bank, which is not responsible for any analyses, interpretations, or conclusions.

References

1. Institute of Medicine. To Err Is Human: Building a Safer Health System. Washington, DC: National Academy Press; 2000.
2. Khuri SF, Daley J, Henderson W, et al; National VA Surgical Quality Improvement Program. The Department of Veterans Affairs’ NSQIP: the first national, validated, outcome-based, risk-adjusted, and peer-controlled program for the measurement and enhancement of the quality of surgical care. Ann Surg. 1998;228(4):491-507.
3. Hannan EL, Kilburn H Jr, Racz M, Shields E, Chassin MR. Improving the outcomes of coronary artery bypass surgery in New York State. JAMA. 1994;271(10):761-766.
4. O’Connor GT, Plume SK, Olmstead EM, et al; The Northern New England Cardiovascular Disease Study Group. A regional intervention to improve the hospital mortality associated with coronary artery bypass graft surgery. JAMA. 1996;275(11):841-846.
5. Hall BL, Hamilton BH, Richards K, Bilimoria KY, Cohen ME, Ko CY. Does surgical quality improve in the American College of Surgeons National Surgical Quality Improvement Program: an evaluation of all participating hospitals. Ann Surg. 2009;250(3):363-376.
6. Shahian DM, Edwards FH, Jacobs JP, et al. Public reporting of cardiac surgery performance, part 1: history, rationale, consequences. Ann Thorac Surg. 2011;92(3)(suppl):S2-S11.
7. Ryan AM, Nallamothu BK, Dimick JB. Medicare’s public reporting initiative on hospital quality had modest or no impact on mortality from three key conditions. Health Aff (Millwood). 2012;31(3):585-592.
8. VanLare JM, Blum JD, Conway PH. Linking performance with payment: implementing the Physician Value-Based Payment Modifier. JAMA. 2012;308(20):2089-2090.
9. American College of Surgeons. National Trauma Data Bank Report 2002. Chicago, IL: American College of Surgeons; 2002.
10. Glance LG, Osler TM, Dick AW, Mukamel DB, Meredith W. The Survival Measurement and Reporting Trial for Trauma (SMARTT): background and study design. J Trauma. 2010;68(6):1491-1497.
11. MacKenzie EJ, Rivara FP, Jurkovich GJ, et al. A national evaluation of the effect of trauma-center care on mortality. N Engl J Med. 2006;354(4):366-378.
12. Shafi S, Friese R, Gentilello LM. Moving beyond personnel and process: a case for incorporating outcome measures in the trauma center designation process. Arch Surg. 2008;143(2):115-119, discussion 120.
13. Glance LG, Dick AW, Mukamel DB, Meredith W, Osler TM. The effect of preexisting conditions on hospital quality measurement for injured patients. Ann Surg. 2010;251(4):728-734.
14. American College of Surgeons. National Trauma Data Bank Report 2012. http://www.facs.org/trauma/ntdb/docpub.html.
15. American College of Surgeons. National Trauma Data Bank Reference Manual. Chicago, IL: American College of Surgeons; 2002.
16. American College of Surgeons. National Trauma Data Bank User Manual Research Data Set version 7.0. Chicago, IL: American College of Surgeons; 2008.
17. Osler T, Glance L, Buzas JS, Mukamel D, Wagner J, Dick A. A trauma mortality prediction model based on the anatomic injury scale. Ann Surg. 2008;247(6):1041-1048.
18. Glance LG, Osler TM, Mukamel DB, Meredith W, Wagner J, Dick AW. TMPM-ICD9: a trauma mortality prediction model based on ICD-9-CM codes. Ann Surg. 2009;249(6):1032-1039.
19. National Trauma Data Bank. National Trauma Data Standard data dictionary, version 1.2. 2008. http://www.ntdsdictionary.org/. Accessed November 20, 2013.
20. Shadish WR, Cook TD, Campbell DT. Experimental and Quasi-Experimental Designs for Generalized Causal Inference. Independence, KY: Wadsworth Publishing; 2002.
21. Yörük BK, Yörük CE. The impact of minimum legal drinking age laws on alcohol consumption, smoking, and marijuana use: evidence from a regression discontinuity design using exact date of birth. J Health Econ. 2011;30(4):740-752.
22. van Buuren S, Boshuizen HC, Knook DL. Multiple imputation of missing blood pressure covariates in survival analysis. Stat Med. 1999;18(6):681-694.
23. Royston P, Altman DG. Regression using fractional polynomials of continuous covariates: parsimonious parametric modeling. Appl Stat. 1994;43:429-467. doi:10.2307/2986270.
24. DeLong ER, Peterson ED, DeLong DM, Muhlbaier LH, Hackett S, Mark DB. Comparing risk-adjustment methods for provider profiling. Stat Med. 1997;16(23):2645-2664.
25. White H. A heteroskedasticity-consistent covariance matrix estimator and a direct test for heteroskedasticity. Econometrica. 1980;48:817-830. doi:10.2307/1912934.
26. Khuri SF. The NSQIP: a new frontier in surgery. Surgery. 2005;138(5):837-843.
27. Maggard-Gibbons M. Use of report cards and outcome measurements to improve safety of surgical care: American College of Surgeons National Quality Improvement Program. In: Making Health Care Safer II: An Updated Critical Analysis of the Evidence for Patient Safety Practices. Rockville, MD: Agency for Healthcare Research and Quality; 2013:140-156.
28. Glickman SW, Ou FS, DeLong ER, et al. Pay for performance, quality of care, and outcomes in acute myocardial infarction. JAMA. 2007;297(21):2373-2380.
29. Ryan AM. Effects of the Premier Hospital Quality Incentive Demonstration on Medicare patient mortality and cost. Health Serv Res. 2009;44(3):821-842.
30. Khuri SF, Daley J, Henderson WG. The comparative assessment and improvement of quality of surgical care in the Department of Veterans Affairs. Arch Surg. 2002;137(1):20-27.
31. Stulberg JJ, Delaney CP, Neuhauser DV, Aron DC, Fu P, Koroukian SM. Adherence to surgical care improvement project measures and the association with postoperative infections. JAMA. 2010;303(24):2479-2485.
32. Werner RM, Bradlow ET. Public reporting on hospital process improvements is linked to better patient outcomes. Health Aff (Millwood). 2010;29(7):1319-1324.
33. Shafi S, Nathens AB, Cryer HG, et al. The Trauma Quality Improvement Program of the American College of Surgeons Committee on Trauma. J Am Coll Surg. 2009;209(4):521-530.e531.
34. American College of Surgeons. Trauma Quality Improvement Program (TQIP). http://www.facs.org/trauma/ntdb/tqip.html. Accessed March 8, 2013.
35. Blumenthal D, Dixon J. Health-care reforms in the USA and England: areas for useful learning. Lancet. 2012;380(9850):1352-1357.

Supplement.

eAppendix. Sample Report Card

eTable 1. Summary Statistics by Year

eTable 2. Hospital Characteristics
