Original Investigation | Innovation in Safety: Safety in Innovation

A Comparison of 2 Surgical Site Infection Monitoring Systems

Mila H. Ju, MD, MS1,2; Clifford Y. Ko, MD, MS, MSHS1,3,4; Bruce L. Hall, MD, PhD, MBA1,5,6,7,8; Charles L. Bosk, PhD9,10; Karl Y. Bilimoria, MD, MS1,2; Elizabeth C. Wick, MD11
Author Affiliations
1Division of Research and Optimal Patient Care, American College of Surgeons, Chicago, Illinois
2Surgical Outcomes and Quality Improvement Center, Department of Surgery, Feinberg School of Medicine, Northwestern University, Northwestern Memorial Hospital, Chicago, Illinois
3Department of Surgery, David Geffen School of Medicine, University of California, Los Angeles
4Veterans Affairs Greater Los Angeles Healthcare System, Los Angeles, California
5Department of Surgery, Washington University in St Louis, St Louis, Missouri
6Olin Business School and Center for Health Policy, Washington University in St Louis, St Louis, Missouri
7St Louis Veterans Affairs Medical Center, St Louis, Missouri
8BJC HealthCare, St Louis, Missouri
9Department of Sociology, Leonard Davis Institute of Health Economics, Philadelphia, Pennsylvania
10Department of Anesthesiology and Critical Care, University of Pennsylvania, Philadelphia
11Department of Surgery, The Johns Hopkins University, Baltimore, Maryland
JAMA Surg. 2015;150(1):51-57. doi:10.1001/jamasurg.2014.2891.

Importance  Surgical site infection (SSI) has emerged as the leading publicly reported surgical outcome and is tied to payment determinations. Many hospitals monitor SSIs using the American College of Surgeons National Surgical Quality Improvement Program (ACS NSQIP), in addition to mandatory participation (for most states) in the Centers for Disease Control and Prevention’s National Healthcare Safety Network (NHSN), which has resulted in duplication of effort and incongruent data.

Objective  To identify discrepancies in the implementation of the NHSN and the ACS NSQIP at hospitals that may be affecting the respective SSI rates.

Design, Setting, and Participants  A pilot sample of hospitals that participate in both the NHSN and the ACS NSQIP.

Interventions  For each hospital, observed rates and risk-adjusted observed to expected ratios for year 2012 colon SSIs were collected from both programs. The implementation methods of both programs were identified, including telephone interviews with infection preventionists who collect data for the NHSN at each hospital.

Main Outcomes and Measures  Collection methods and colon SSI rates for the NHSN at each hospital were compared with those of the ACS NSQIP.

Results  Of 16 hospitals, 11 were teaching hospitals with at least 500 beds. The mean observed colon SSI rates were dissimilar between the 2 programs, 5.7% (range, 2.0%-14.5%) for the NHSN vs 13.5% (range, 4.6%-26.7%) for the ACS NSQIP. The mean difference between the NHSN and the ACS NSQIP was 8.3% (range, 1.6%-18.8%), with the ACS NSQIP rate always higher. The correlation between the observed to expected ratios for the 2 programs was nonsignificant (Pearson product moment correlation, ρ = 0.4465; P = .08). The NHSN collection methods were dissimilar among interviewed hospitals. An SSI managed as an outpatient case would usually be missed under the current NHSN practices.

Conclusions and Relevance  Colon SSI rates from the NHSN and the ACS NSQIP cannot be used interchangeably to evaluate hospital performance and determine reimbursement. Hospitals should not use the ACS NSQIP colon SSI rates for the NHSN reports because that would likely result in the hospital being an outlier for performance. It is imperative to reconcile SSI monitoring, develop consistent definitions, and establish one reliable method. The current state hinders hospital improvement efforts by adding unnecessary confusion to the already complex arena of perioperative improvement.


Surgical site infection (SSI) is one of the most commonly reported hospital-related infections and is associated with increased morbidity, length of hospital stay, and overall cost.1,2 The need for infection surveillance was recognized more than 40 years ago by the Centers for Disease Control and Prevention (CDC)3 and the Joint Commission on Accreditation of Hospitals4 and was shown to be effective in the prevention of SSIs.5 Since the establishment in 2005 of the CDC’s National Healthcare Safety Network (NHSN),6,7 SSI has emerged as the leading outcome measure of surgical quality. The NHSN’s data for SSIs occurring after colon surgery or abdominal hysterectomy are being used in the National Quality Forum SSI measure (No. 0753), which has been incorporated into Medicare’s Hospital Inpatient Quality Reporting program under the fiscal year 2011 final rule, publicly reported on the Hospital Compare website since December 2012, and tied to payment determinations since February 2013 through the Medicare Hospital Value-Based Purchasing program.8

However, recent investigations have questioned the accuracy of the NHSN data and the sufficiency of its risk-adjustment methods.9 The New York State Department of Health audited the NHSN SSI data for colon procedures and found a 10.9% false-positive rate and a 39.6% false-negative rate.10 In an institutional study11 examining SSI surveillance after congenital cardiac surgery, the annual SSI rate differed by 17% to 71% between the NHSN surveillance data and data from a national surgical registry.

In addition to legally mandated participation in the NHSN in most states, many hospitals monitor SSIs using the American College of Surgeons National Surgical Quality Improvement Program (ACS NSQIP), which has been shown to be superior to administrative claims data for identifying many postoperative complications.12 The ACS NSQIP uses the same verbal description definition for SSI as the NHSN. However, despite congruent definitions, hospitals report that their SSI rates and performance evaluations differ considerably between the 2 programs. Therefore, the objective of this study was to identify discrepancies in the implementation of the NHSN and the ACS NSQIP at hospitals that may be affecting the respective SSI rates and subsequent performance evaluations.

Study Design

The Northwestern University Institutional Review Board reviewed and approved the study. The study was deemed exempt, and informed consent was not obtained. Sixteen hospitals had noticed inconsistent data between the NHSN and the ACS NSQIP and volunteered to participate in this pilot study. Characteristics of each hospital were obtained from the American Hospital Association 2010 annual survey data.13 We focused on colon surgery because of its status as a current hospital performance measure.8 To understand the NHSN implementation at each hospital, 30-minute telephone interviews with infection preventionists who conduct colon surgery SSI surveillance were performed. Interviews with the ACS NSQIP data abstractors at each hospital were not necessary because the ACS NSQIP is conducted according to standardized processes and is audited to minimize variation in data abstraction. The ACS NSQIP clinical support team, which clarifies data collection methods for hospitals that have questions about their data abstraction, was also consulted for this study. Collection methods for the NHSN at each hospital were compared with each other and with the ACS NSQIP methods.

NHSN vs ACS NSQIP Colon SSI Rates

At each hospital, observed rates and risk-adjusted observed to expected (O:E) ratios for year 2012 colon SSIs according to the NHSN and the ACS NSQIP programs were collected and compared with each other. The current ACS NSQIP hospital evaluations report performance in terms of odds ratios; however, these were converted to O:E ratios for direct comparison with the NHSN O:E ratios. Pearson product moment correlation was used to compare the NHSN and the ACS NSQIP risk-adjusted colon SSI O:E ratios.

Statistical analyses were performed with SAS software (release 9.3; SAS Institute Inc). Statistical significance was set at the .05 level.
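
As an illustration of the correlation step described above, the following is a minimal sketch in Python using hypothetical per-hospital O:E ratio vectors rather than the study data; the study itself used SAS.

```python
# Minimal sketch of the Pearson correlation analysis (Python stand-in for the
# SAS analysis described above). The O:E ratios below are hypothetical
# placeholders, not the study data.
from scipy.stats import pearsonr

nhsn_oe = [0.6, 0.9, 1.1, 0.8, 1.4, 0.7, 1.0, 1.2]    # hypothetical NHSN O:E ratios
nsqip_oe = [1.1, 0.8, 1.3, 1.0, 1.2, 0.9, 1.4, 0.7]   # hypothetical ACS NSQIP O:E ratios

r, p_value = pearsonr(nhsn_oe, nsqip_oe)  # Pearson product moment correlation
print(f"r = {r:.4f}, P = {p_value:.2f}")
if p_value >= 0.05:                       # significance threshold used in the study
    print("Correlation not statistically significant at the .05 level")
```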

Of 16 hospitals, 2 had fewer than 300 beds, 3 had 300 to 500 beds, and 11 had 500 or more beds (Table 1). All hospitals were nongovernmental and not for profit. Eleven hospitals had residency training programs approved by the Accreditation Council for Graduate Medical Education, 15 hospitals had accreditation by the Joint Commission on Accreditation of Hospitals, 12 hospitals were accredited by the ACS Commission on Cancer, and 9 hospitals were designated as a level I trauma center.

Table 1. Characteristics of 16 Hospitals
NHSN Data Collection

The NHSN defines the numerator and the denominator for SSI surveillance.14 The NHSN identifies colon cases using International Classification of Diseases, Ninth Revision, Clinical Modification (ICD-9-CM) codes (Table 2). All colon cases, except those with an American Society of Anesthesiologists score of 6, are eligible for SSI surveillance (the denominator). Because of the lack of granularity of ICD-9-CM procedure codes, the NHSN codes will identify some proctectomy cases (eg, total proctocolectomy with ileal pouch anal anastomosis, proctectomy with ileal pouch anal anastomosis, and low anterior resections). In the ACS NSQIP, these procedures would be included based on American Medical Association Current Procedural Terminology code designation in the proctectomy targeted procedure group. At some hospitals, infection control departments may eliminate proctectomies with ileal pouch anal anastomosis or low anterior resections after discussion with surgeons and recoding of the procedure by the hospital. The NHSN uses the CDC’s definition for SSI, including superficial incisional, deep incisional, and organ space.15 Any SSIs that occur within 30 days of eligible colon cases, except stoma site infections, are included as an event in the numerator. If multiple procedures are performed through the same incision (eg, colon and small-bowel resection) and an SSI occurs, the SSI event would be assigned only to the ICD-9-CM code with the highest risk. In previous years, small-bowel cases had a higher risk assignment than colon cases. Since January 2013, colon surgery has carried the highest risk assignment among abdominal operations other than liver transplantation.14
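
The eligibility and SSI-assignment rules above can be summarized as simple decision logic. The following is a minimal sketch with hypothetical procedure codes and a hypothetical risk ranking; the authoritative code lists and rankings are those in the NHSN protocol.14

```python
# Sketch of the NHSN eligibility and SSI-assignment rules described above.
# The ICD-9-CM code list and risk ranking here are hypothetical placeholders;
# the authoritative mappings are defined by the NHSN protocol.
COLON_CODES = {"45.7x", "45.8x", "17.3x"}          # hypothetical colon procedure codes
RISK_RANK = {"liver transplant": 1, "colon": 2,    # lower number = higher risk
             "small bowel": 3}                     # (post-January 2013 ordering)

def in_denominator(procedure_code: str, asa_class: int) -> bool:
    """Eligible colon case: matching code and ASA class other than 6."""
    return procedure_code in COLON_CODES and asa_class != 6

def assign_ssi(procedures_same_incision: list[str]) -> str:
    """When multiple procedures share one incision, credit the SSI to the
    procedure category with the highest risk assignment."""
    return min(procedures_same_incision, key=lambda p: RISK_RANK[p])

print(in_denominator("45.7x", asa_class=3))   # -> True
# Colon plus small-bowel resection through one incision: the SSI is
# assigned to the colon procedure under the current ranking.
print(assign_ssi(["small bowel", "colon"]))   # -> colon
```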

Table 2. Comparison of the Implementation Methods of the NHSN vs the ACS NSQIP

The NHSN provides some short online modules for infection preventionists on how to collect data for colon SSI surveillance, but these are not intended to be specific guidelines.14 As a result, the NHSN collection methods were found to vary among the participating hospitals. Although at least 6 of 16 hospitals interviewed used some form of electronic system (commercial or developed by the institution) to trigger cases to be reviewed by an infection preventionist, these trigger rules were not standardized among the hospitals. Readmission to the same hospital was the most commonly used trigger rule: if a patient who had undergone an eligible colon case was readmitted within the follow-up period, then an electronic trigger would flag that case to be reviewed by the infection preventionist. Some institutions added further criteria, such as the initiation of antibiotics within 24 hours of readmission, the collection of a wound culture, a debridement surgical procedure, or a diagnostic code. Only one hospital included outpatient prescriptions for antibiotics as a trigger rule. For most of the institutions interviewed, the infection preventionists reviewed only cases that were identified by the electronic trigger system. The following SSIs would not usually trigger a review: SSI diagnosed during an index hospitalization, SSI managed as an outpatient case, or SSI treated at a hospital outside of the institutional system. The electronic surveillance system is run daily at some institutions and monthly at others.
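
To make the readmission-based trigger concrete, the following is a minimal sketch of such a rule; the field names and the 30-day window applied here are illustrative, and the actual rules varied by hospital.

```python
# Sketch of a readmission-based electronic trigger rule of the kind described
# above. The field names and the 30-day window used here are illustrative;
# actual rules varied by hospital and were not standardized.
from datetime import date, timedelta
from typing import Optional

def flag_for_review(surgery_date: date,
                    readmission_date: Optional[date],
                    antibiotics_within_24h: bool = False,
                    wound_culture_collected: bool = False) -> bool:
    """Flag an eligible colon case for infection-preventionist review."""
    if readmission_date is None:
        return False                      # SSIs managed as outpatient cases are missed
    within_window = (readmission_date - surgery_date) <= timedelta(days=30)
    supporting_evidence = antibiotics_within_24h or wound_culture_collected
    return within_window and supporting_evidence

# A readmission 11 days after surgery with early antibiotics is flagged.
print(flag_for_review(date(2012, 3, 1), date(2012, 3, 12),
                      antibiotics_within_24h=True))   # -> True
```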

Some institutions do not have an electronic trigger system. In this case, a data analyst would use ICD-9-CM codes to identify all colon cases to be reviewed by the infection preventionists. Then, the infection preventionists would review daily microbiology results for positive cultures after colon surgical procedures or hospital readmission records related to surgical wound infections. However, an SSI managed on an outpatient basis (the most common setting for the management of superficial SSIs) would again be missed.

At some institutions, each infection preventionist was assigned to certain procedure groups or floor units. Others had only a few infection preventionists with no assigned roles. Data auditing is also dependent on the institution. Some managers for infection prevention or quality improvement conduct audits of every case of SSI, and some managers perform spot checks. Most of the institutions reported having an inadequate workforce to conduct detailed data audits.

NHSN Hospital Performance Risk Adjustment

Since 2009, the NHSN has moved to a standardized infection ratio model.16 The standardized infection ratio is calculated by dividing the number of observed infections by the number of expected infections. The number of expected infections is estimated from multivariable logistic regression models using the NHSN baseline data (2006-2008), which represent a standard population’s SSI experience (adjusting for age, anesthesia type, American Society of Anesthesiologists class, duration of surgery, medical school affiliation, bed size, and wound classification).14 The NHSN provides hospitals with monthly reports.
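
The standardized infection ratio calculation itself is straightforward, as the following sketch illustrates with hypothetical per-case predicted probabilities standing in for the NHSN baseline model output.

```python
# Sketch of the standardized infection ratio (SIR) calculation described
# above: observed SSIs divided by the expected count, where the expected
# count is the sum of per-case predicted probabilities from the NHSN
# baseline logistic regression model. The values below are hypothetical.
predicted_probabilities = [0.04, 0.07, 0.12, 0.05, 0.09]  # per eligible colon case
observed_ssis = 1

expected_ssis = sum(predicted_probabilities)
sir = observed_ssis / expected_ssis
print(f"Expected = {expected_ssis:.2f}, SIR = {sir:.2f}")  # SIR > 1: more SSIs than expected
```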

ACS NSQIP Data Collection

To join the ACS NSQIP, a hospital must demonstrate that it has adequate resources and staffing for data collection and analysis. Dedicated surgical clinical reviewers, the data abstractors for the ACS NSQIP at each hospital, receive formal training and continuous support from the ACS NSQIP clinical support team.17 Protocols for data acquisition and transmission are defined in the ACS NSQIP operating manual to optimize accuracy.17 The surgical clinical reviewers make telephone calls, send out letters, and conduct public record searches in addition to reviewing all inpatient, outpatient, and available outside facility records to obtain complete 30-day follow-up data. To further ensure that the data are rigorously collected, members of the clinical support team visit samples of participating hospitals and perform audits. Hospitals that fail an audit receive further education and are removed from the reporting process until the hospital passes remediation.

The numerators and denominators for SSI events and other perioperative variables are defined in the operating manual.17 The ACS NSQIP identifies colon cases using Current Procedural Terminology codes (Table 2). Unless a hospital participates in the colectomy procedure targeted program option (in which all colectomy cases are reviewed), cases to be reviewed by the surgical clinical reviewers are selected as a systematic sample of general and vascular or multispecialty cases performed in the hospital. The ACS NSQIP uses the CDC’s verbal description definition for SSI. If the skin was left open, then superficial incisional SSI cannot be assigned, but deep incisional and organ-space SSIs can be. If multiple procedures were performed through the same incision and an SSI occurred after surgery, then the SSI would be assigned to the designated principal procedure. In addition, SSI cannot be assigned to the case if a wound infection was present at the time of surgery.
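
The SSI-assignment constraints above can be expressed as simple conditional logic. The following is a minimal sketch with illustrative field names; it is not the registry's actual abstraction software.

```python
# Sketch of the ACS NSQIP SSI-assignment constraints described above.
# Field names are illustrative, not the registry's actual variable names.
from typing import Optional

def assign_nsqip_ssi(ssi_type: str,
                     skin_left_open: bool,
                     infection_present_at_surgery: bool,
                     principal_procedure: str) -> Optional[str]:
    """Return the procedure credited with the SSI, or None if no SSI can be assigned."""
    if infection_present_at_surgery:
        return None          # a wound infection present at surgery precludes an SSI
    if skin_left_open and ssi_type == "superficial incisional":
        return None          # superficial incisional SSI cannot be assigned if skin was left open
    return principal_procedure   # multiple procedures through one incision: credit the principal one

print(assign_nsqip_ssi("organ space", skin_left_open=True,
                       infection_present_at_surgery=False,
                       principal_procedure="colectomy"))  # -> colectomy
```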

ACS NSQIP Hospital Performance Risk Adjustment

The ACS NSQIP uses hierarchical multivariable logistic regression modeling for hospital performance risk adjustment.18 This statistical approach accounts for the clustering of patients within hospitals, reduces false-positive rates due to multiple sampling, and adjusts for hospitals with small numbers of cases (Bayesian shrinkage or reliability adjustment). For each model, a forward selection process is used first to select a set of strong predictor variables from more than 30 available patient variables. C statistics and Brier scores are used to evaluate the discrimination and calibration of the models; the Brier score ranges from 0.0 to 1.0, with 0.0 indicating perfect prediction. This approach levels the playing field between hospitals with large proportions of high-risk patients and those without, and between high-volume and low-volume hospitals. Every 3 months, the ACS NSQIP provides hospitals with risk-adjusted reports that allow participating hospitals to compare their risk-adjusted outcomes with those of other hospitals. In addition, the ACS NSQIP provides continuous real-time, risk-adjusted estimates for 6 measure models, including one for colon surgery death or serious morbidity.
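
As an illustration of the two evaluation metrics named above, the following sketch computes a C statistic and a Brier score for hypothetical predictions and outcomes.

```python
# Sketch of the model evaluation metrics mentioned above: the C statistic
# (area under the ROC curve, discrimination) and the Brier score
# (0.0 = perfect prediction). Predictions and outcomes are hypothetical.
from sklearn.metrics import roc_auc_score, brier_score_loss

outcomes = [0, 0, 1, 0, 1, 0, 0, 1]             # observed SSI (1) vs none (0)
predictions = [0.05, 0.10, 0.40, 0.08, 0.55,
               0.12, 0.20, 0.30]                # model-predicted probabilities

c_statistic = roc_auc_score(outcomes, predictions)
brier = brier_score_loss(outcomes, predictions)
print(f"C statistic = {c_statistic:.2f}, Brier score = {brier:.3f}")
```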

NHSN vs ACS NSQIP Colon SSI Rates

The mean observed colon SSI rates were dissimilar between the 2 programs at 5.7% (range, 2.0%-14.5%) for the NHSN vs 13.5% (range, 4.6%-26.7%) for the ACS NSQIP (Table 3). The mean difference between the NHSN and the ACS NSQIP was 8.3% (range, 1.6%-18.8%), with the ACS NSQIP rate always higher. The risk-adjusted O:E ratios for the 2 programs were also different (Figure). The correlation between the risk-adjusted O:E ratios for the 2 programs was nonsignificant (Pearson product moment correlation, ρ = 0.4465; P = .08). One hospital declined to share its NHSN observed rate.

Table 3. Observed Colon SSI Rates for the NHSN vs the ACS NSQIP per Hospital
Figure. Risk-Adjusted O:E Ratios for the NHSN vs the ACS NSQIP per Hospital

ACS NSQIP indicates American College of Surgeons National Surgical Quality Improvement Program; NHSN, National Healthcare Safety Network; and O:E ratio, observed to expected ratio.

After examining 16 hospitals that collect data on SSIs after colon surgery for both the NHSN and the ACS NSQIP, we found considerable differences in the implementation between the 2 programs, with marked variation in that of the NHSN. None of the hospitals interviewed had the same NHSN program implementation as the other hospitals in the group. Some hospitals use electronic trigger rules (not standardized among the hospitals) to select cases for review by infection preventionists, while infection preventionists at other hospitals review daily microbiology results or hospital readmission records for potential cases. Most of the time, an SSI managed on an outpatient basis (the most common setting for the management of superficial SSIs) would be missed under the current NHSN practices. In contrast, the ACS NSQIP abstractors reviewed all available inpatient and outpatient records of sampled colon cases for 30 days after surgery. In addition, follow-up letters and telephone calls were used to reduce the chance of missing outside hospital or clinic records and outpatient diagnosis and management of infections. With such variation in program implementation, the resulting data on SSI rates and hospital performance differed between the NHSN and the ACS NSQIP, with an 8.3% mean difference in infection rates. The ACS NSQIP rates were also always higher. These findings were similar to those in the study by Atchley et al11 comparing the NHSN data with The Society of Thoracic Surgeons Congenital Heart Surgery Database. In addition, the correlation between risk-adjusted hospital performances on colon SSIs according to the 2 programs was not statistically significant. A hospital performing well in one program might be performing poorly according to the other program.

Although a consensus panel report was published in 1998 with infrastructure requirements for infection control and prevention,19 the document failed to give detailed instructions on how to carry out such requirements, which might have contributed to the large variation in the NHSN program implementation across hospitals. In a study of approximately 1000 acute care hospitals, Stone et al20 found large variation in the organization and structure of infection control and prevention programs. Interpretations of hospital-associated infection definitions were also shown to be heterogeneous in a survey of more than 100 infection preventionists and hospital epidemiologists.21 Many of the infection preventionists interviewed for the present study had stated that the lack of detailed protocols on how to obtain and interpret data is one of the greatest weaknesses of the NHSN.

The recent study by Stone et al20 found that only 34% of acute care hospitals use electronic surveillance systems. Some might argue that there would not be as much variation among hospitals if all hospitals used electronic systems for the NHSN case findings. In a single-institution study,22 however, an electronic surveillance system did not identify a large proportion of SSIs and had different risk factor and cost estimates compared with the ACS NSQIP data abstracted by the surgical clinical reviewers. In addition, we found that trigger rules and data sources are not consistent across the hospitals that use electronic surveillance systems.

With such inconsistencies in program implementation and data interpretation across the NHSN hospitals, it is not surprising that the resulting SSI rates were incongruent and differed greatly from clinical registry data, as demonstrated from previous state10 and institutional11 studies. In contrast, Shiloach et al23 showed that the ACS NSQIP collected robust data through its training and audit procedure and found that interrater reliability audits had improved over the years. Another study12 also showed the superiority of the ACS NSQIP over administrative claims data for identifying postoperative complications. Our comparisons of colon SSI rates revealed that SSI rates from the NHSN were lower than those from the ACS NSQIP by a mean difference of 8.3% (range, 1.6%-18.8%).

Multiple patient comorbidities, such as diabetes mellitus, malnutrition, anemia, and tobacco use, have been shown to be significant risk factors for SSIs24 and have been used in risk adjustment of hospital performance on colon SSIs to account for hospitals with greater proportions of high-risk patients. The NHSN reports a standardized infection ratio, which is analogous to an O:E ratio calculated using multivariable logistic regression with a fixed set of predictors. In comparison, the ACS NSQIP uses hierarchical multivariable logistic regression, which not only adjusts for different patient characteristics but also accounts both for the clustering of patients and for hospitals with smaller numbers of patients during the current collection period.18 With different rates and risk-adjustment methods, it is not surprising that there were no statistically significant correlations between the NHSN and the ACS NSQIP risk-adjusted O:E ratios.

These discrepancies come at a time when hospitals are experiencing intense pressure to save money and improve the quality of care. Multiple programs at the federal, state, and local levels have engaged hospitals in initiatives to improve care but require significant hospital resource commitments. Duplicative data abstractions have diluted efforts and diverted health care providers’ attention toward data discrepancies rather than improving care. An opportunity exists to standardize definitions, to improve the consistency of collection methods, and to ensure data rigor by implementing formal audits.

This study should be considered in light of its limitations. A sampling bias might exist because we interviewed a pilot sample of 16 hospitals that may not represent all hospitals in the United States. These hospitals had noticed inconsistent data between the NHSN and the ACS NSQIP and were willing to participate in the study. It is possible that the differences we found between the NHSN and the ACS NSQIP might be smaller in other hospitals. Other hospital infrastructure might affect the reliability of the NHSN and ACS NSQIP data, including the presence of inpatient or outpatient electronic medical records, hospital-employed surgeons vs independent surgeons, and the locus of the ACS NSQIP and NHSN data abstraction in a hospital’s organizational structure. Regardless, the inconsistencies that we described call into question the validity of the NHSN data and suggest that the NHSN and ACS NSQIP data are not interchangeable. In addition, while the current ACS NSQIP reporting uses odds ratios in hospital performance evaluations, this work used O:E ratios to match those of the NHSN. However, this conversion is unlikely to have introduced more than a small element of drift into these comparisons.

Colon SSI rates from the NHSN and the ACS NSQIP should not be used interchangeably to evaluate hospital performance for the purposes of quality improvement, public reporting, or pay for performance at this time. Similarly, hospitals should not use the ACS NSQIP colon SSI rates for NHSN reporting because that practice would likely result in false comparisons and apparent poorer performance status. Great variation exists among hospitals in data collection methods within the NHSN. It is imperative to establish one reliable method for SSI monitoring. The current state is likely hindering hospital improvement efforts by adding unnecessary confusion to the already complex task of measuring perioperative performance. Hospitals are potentially spending unnecessary time and resources collecting duplicative data and being distracted by discrepancies in reports.

Accepted for Publication: August 4, 2014.

Corresponding Author: Elizabeth C. Wick, MD, Department of Surgery, The Johns Hopkins University, 600 N Wolfe St, Blalock Room 658, Baltimore, MD 21287 (ewick1@jhmi.edu).

Published Online: November 26, 2014. doi:10.1001/jamasurg.2014.2891.

Author Contributions: Dr Wick had full access to all the data in the study and takes responsibility for the integrity of the data and the accuracy of the data analysis.

Study concept and design: All authors.

Acquisition, analysis, or interpretation of data: Ju, Ko, Hall, Bilimoria, Wick.

Drafting of the manuscript: Ju, Wick.

Critical revision of the manuscript for important intellectual content: All authors.

Statistical analysis: Ju.

Administrative, technical, or material support: Ko, Hall.

Study supervision: Wick.

Conflict of Interest Disclosures: Dr Ju reported receiving a stipend that is partially supported by grant 5T32HL094293 from the National Institutes of Health and the American College of Surgeons Clinical Scholars in Residence program. Dr Hall reported being a paid consultant for the American College of Surgeons. No other disclosures were reported.

References

1. Boltz MM, Hollenbeak CS, Julian KG, Ortenzi G, Dillon PW. Hospital costs associated with surgical site infections in general and vascular surgery patients. Surgery. 2011;150(5):934-942.
2. Hollenbeak CS, Murphy D, Dunagan WC, Fraser VJ. Nonrandom selection and the attributable cost of surgical-site infections. Infect Control Hosp Epidemiol. 2002;23(4):177-182.
3. Garner JS, Bennett JV, Scheckler WE, Maki DG, Brachman PS. Surveillance of nosocomial infections. In: Proceedings of the International Conference on Nosocomial Infections. Atlanta, GA: Centers for Disease Control and Prevention; 1970.
4. Joint Commission on Accreditation of Hospitals. Accreditation Manual for Hospitals. Chicago, IL: Joint Commission on Accreditation of Hospitals; 1976.
5. Haley RW, Culver DH, White JW, et al. The efficacy of infection surveillance and control programs in preventing nosocomial infections in US hospitals. Am J Epidemiol. 1985;121(2):182-205.
6. Tokars JI, Richards C, Andrus M, et al. The changing face of surveillance for health care–associated infections. Clin Infect Dis. 2004;39(9):1347-1352.
7. Dudeck MA, Horan TC, Peterson KD, et al. National Healthcare Safety Network (NHSN) report, data summary for 2009, device-associated module. Am J Infect Control. 2011;39(5):349-367.
8. Centers for Medicare and Medicaid Services (CMS), HHS. Medicaid program; payment adjustment for provider-preventable conditions including health care-acquired conditions. Final rule. Fed Regist. 2011;76(108):32816-32838.
9. Berríos-Torres SI, Mu Y, Edwards JR, Horan TC, Fridkin SK. Improved risk adjustment in public reporting: coronary artery bypass graft surgical site infections. Infect Control Hosp Epidemiol. 2012;33(5):463-469.
10. Haley VB, Van Antwerpen C, Tserenpuntsag B, et al. Use of administrative data in efficient auditing of hospital-acquired surgical site infections, New York State 2009-2010. Infect Control Hosp Epidemiol. 2012;33(6):565-571.
11. Atchley KD, Pappas JM, Kennedy AT, et al. Use of administrative data for surgical site infection surveillance after congenital cardiac surgery results in inaccurate reporting of surgical site infection rates. Ann Thorac Surg. 2014;97(2):651-658.
12. Lawson EH, Louie R, Zingmond DS, et al. A comparison of clinical registry versus administrative claims data for reporting of 30-day surgical complications. Ann Surg. 2012;256(6):973-981.
13. American Hospital Association Data Viewer. American Hospital Association annual survey database. http://www.ahadataviewer.com/. Accessed September 29, 2014.
14. Centers for Disease Control and Prevention National Healthcare Safety Network. Surveillance for surgical site infection (SSI) events. http://www.cdc.gov/nhsn/acute-care-hospital/ssi/. Accessed April 27, 2014.
15. Mangram AJ, Horan TC, Pearson ML, Silver LC, Jarvis WR; Centers for Disease Control and Prevention (CDC) Hospital Infection Control Practices Advisory Committee. Guideline for prevention of surgical site infection, 1999. Am J Infect Control. 1999;27(2):96-134.
16. Centers for Disease Control and Prevention National Healthcare Safety Network e-news. Your guide to the standardized infection ratio (SIR). http://www.cdc.gov/nhsn/PDFs/Newsletters/NHSN_NL_OCT_2010SE_final.pdf. Accessed April 27, 2014.
17. American College of Surgeons. American College of Surgeons National Surgical Quality Improvement Program (ACS NSQIP). http://site.acsnsqip.org/program-specifics/. Accessed April 27, 2014.
18. Cohen ME, Ko CY, Bilimoria KY, et al. Optimizing ACS NSQIP modeling for evaluation of surgical quality and risk: patient risk adjustment, procedure mix adjustment, shrinkage adjustment, and surgical focus. J Am Coll Surg. 2013;217(2):336-346, e1.
19. Scheckler WE, Brimhall D, Buck AS, et al; Society for Healthcare Epidemiology of America. Requirements for infrastructure and essential activities of infection control and epidemiology in hospitals: a consensus panel report. Am J Infect Control. 1998;26(1):47-60.
20. Stone PW, Pogorzelska-Maziarz M, Herzig CT, et al. State of infection prevention in US hospitals enrolled in the National Health and Safety Network. Am J Infect Control. 2014;42(2):94-99.
21. Keller SC, Linkin DR, Fishman NO, Lautenbach E. Variations in identification of healthcare-associated infections. Infect Control Hosp Epidemiol. 2013;34(7):678-686.
22. Hollenbeak CS, Boltz MM, Nikkel LE, Schaefer E, Ortenzi G, Dillon PW. Electronic measures of surgical site infection: implications for estimating risks and costs. Infect Control Hosp Epidemiol. 2011;32(8):784-790.
23. Shiloach M, Frencher SK Jr, Steeger JE, et al. Toward robust information: data quality and inter-rater reliability in the American College of Surgeons National Surgical Quality Improvement Program. J Am Coll Surg. 2010;210(1):6-16.
24. Malone DL, Genuit T, Tracy JK, Gannon C, Napolitano LM. Surgical site infections: reanalysis of risk factors. J Surg Res. 2002;103(1):89-95.
