Statewide Validation of Hospital-Reported Central Line–Associated Bloodstream Infections: Oregon, 2009
Background. Mandatory reporting of healthcare-associated infections is common, but underreporting by hospitals limits meaningful interpretation.
Objective. To validate mandatory intensive care unit (ICU) central line–associated bloodstream infection (CLABSI) reporting by Oregon hospitals.
Design. Blinded comparison of ICU CLABSI determination by hospitals and health department–based external reviewers with group adjudication.
Setting. Forty-four Oregon hospitals required by state law to report ICU CLABSIs.
Participants. Seventy-six patients with ICU CLABSIs and a systematic sample of 741 other patients with ICU-related bacteremia episodes.
Methods. External reviewers examined medical records and determined CLABSI status. All cases with CLABSI determinations discordant from hospital reporting were adjudicated through formal discussion with hospital staff, a process novel to validation of CLABSI reporting.
Results. Hospital representatives and external reviewers agreed on CLABSI status in 782 (96%) of 817 bacteremia episodes (κ = 0.77 [95% confidence interval (CI), 0.70–0.84]). Among the 27 episodes identified as CLABSIs by external reviewers but not reported by hospitals, the final status was CLABSI in 16 (59%). The measured sensitivities of hospital ICU CLABSI reporting were 72% (95% CI, 62%–81%) with adjudicated CLABSI determination as the reference standard and 60% (95% CI, 51%–69%) with external review alone as the reference standard. Validation increased the statewide ICU CLABSI rate from 1.21 (95% CI, 0.95–1.51) to 1.54 (95% CI, 1.25–1.88) CLABSIs/1,000 central line–days; ICU CLABSI rates increased by more than 1.00 CLABSI/1,000 central line–days in 6 (14%) hospitals.
Conclusions. Validating hospital CLABSI reporting improves accuracy of hospital-based CLABSI surveillance. Discussing discordant findings improves the quality of validation.
Central line–associated bloodstream infections (CLABSIs) are deadly, costly, and preventable.1-6 Measuring and preventing CLABSIs constitute a national public health priority that has emerged as an integral component of hospital quality improvement and patient safety programs.7 Despite limited evidence that public reporting improves healthcare outcomes, states have increasingly mandated that hospitals publicly report CLABSI rates to demonstrate quality of care, accountability, and transparency.8,9 As of March 2011, a total of 29 states and the District of Columbia have mandated public reporting of healthcare-associated infections (HAIs), most commonly including CLABSIs.10 Furthermore, beginning in 2011, the Centers for Medicare and Medicaid Services required certain hospitals to report CLABSIs in intensive care units (ICUs) for full payment under the inpatient prospective payment system.11 Most mandates for reporting HAIs require hospitals to submit data through the Centers for Disease Control and Prevention (CDC) National Healthcare Safety Network (NHSN), which provides standardized data collection protocols, case definitions, and training modules.
In 2007, Oregon passed legislation that mandated hospitals to report HAIs on an ongoing basis.12 Beginning in 2009, on the basis of input from a statewide, multidisciplinary advisory committee on prevention of HAIs, Oregon hospitals reported CLABSIs attributed to ICUs as well as surgical-site infections associated with coronary artery bypass grafting and prosthetic knee implantation. These infection types were selected on the basis of their measurability and public health importance, including high mortality and morbidity, potential impact in a substantial number of hospitals, and amenability to prevention through evidence-based infection control practices.
Previous studies comparing hospital reports against a reference standard of independent external review have found that hospitals underreport CLABSIs.13-15 Backman et al13 reported that Connecticut hospitals reported only 23 (48%) of 48 CLABSIs identified by external review. Conversely, the specificity and positive predictive value of hospital CLABSI reporting are high (ie, when hospitals do report a CLABSI, it usually meets the CLABSI case definition).13,14 Systematic underreporting (poor sensitivity) of CLABSI cases leads to a falsely low CLABSI rate that might make hospitals complacent about potential opportunities to improve quality of care.
Because a publicly reported CLABSI rate is based on a hospital’s self-reported data through NHSN, inaccurate data can lead to misleading conclusions regarding the validity of the hospital’s infection prevention program. Validation of hospital CLABSI reports improves data integrity and meaningful comparison of hospital infection rates. Moreover, validation is part of an optimal hospital-based CLABSI surveillance program because it reinforces accepted CLABSI surveillance practices. To maximize hospital acceptability, validation results should be transparent, understandable, and credible and should acknowledge that both hospitals and external reviewers can overlook or misinterpret key information relevant to establishing CLABSI status.
The Oregon Public Health Division (OPHD) sought to validate the accuracy of CLABSI reporting by reviewing calendar year 2009 hospitalization records from Oregon hospitals. Unlike other validation studies,13,14 however, we adjudicated every instance of discrepancy through postreview discussion between external health department reviewers and hospital infection preventionists (IPs) and physicians before establishing the presence or absence of CLABSI.
Methods
Selection of Hospitals
Among Oregon’s 58 nonfederal acute care hospitals, 14 were not required by state law to report CLABSIs attributed to an ICU setting, because they either did not have an ICU or had no more than 10 ICU patients with a central line procedure annually.16 The remaining 44 hospitals that were required to report CLABSIs monthly to NHSN in 2009 were included in our validation study; 23 had at least 1 full-time-equivalent IP, and none received funding for participating in the validation.
Selection of Patients for Medical Record Reviews
We reviewed medical records of all patients for whom the hospital had reported an ICU CLABSI. To determine whether CLABSIs were unreported, we asked IPs and laboratory directors to identify all patients who had an ICU-related bacteremia episode during 2009. An ICU-related bacteremia episode was defined as 1 or more cultured blood samples (including likely skin contaminants) drawn in the ICU or up to 48 hours after an ICU stay and positive for an organism not isolated from blood up to 14 days before ICU admission. The unit of analysis was an ICU-related bacteremia episode; patients could have had multiple ICU-related bacteremia episodes separated by time. However, multiple organisms present in a single blood culture were not counted separately. In 37 of the 44 hospitals, no more than 60 ICU-related bacteremia episodes were not reported as CLABSIs; we reviewed all of these records (census hospitals). For the 7 hospitals with more than 60 ICU-related bacteremia episodes not reported as CLABSIs, we reviewed a random sample of up to 44 records not reported as CLABSIs (sampled hospitals). The investigation was considered standard public health surveillance practice and did not require review by the institutional review board.
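The episode definition above is essentially algorithmic, so a minimal Python sketch may clarify the time windows involved. The record structures are hypothetical — the study identified episodes through hospital IPs and laboratory directors and reviewed charts manually, not by automated query:

```python
from datetime import datetime, timedelta

def is_icu_related_episode(draw_time, organism, icu_stays, prior_blood_isolates):
    """Apply the study's ICU-related bacteremia episode definition.

    draw_time            -- when the positive blood culture was drawn
    organism             -- organism isolated (likely skin contaminants included)
    icu_stays            -- list of (admit, discharge) datetime pairs
    prior_blood_isolates -- list of (time, organism) for earlier positive cultures
    """
    for admit, discharge in icu_stays:
        # Drawn in the ICU or within 48 hours after the ICU stay
        if admit <= draw_time <= discharge + timedelta(hours=48):
            # Organism must not have been isolated from blood in the
            # 14 days before this ICU admission
            window_start = admit - timedelta(days=14)
            previously_isolated = any(
                window_start <= t < admit and org == organism
                for t, org in prior_blood_isolates
            )
            return not previously_isolated
    return False
```

A culture drawn 3 days after ICU discharge, or matching an organism already isolated in the 14 days before ICU admission, would fall outside this definition.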
Medical Record Reviews
Five OPHD personnel conducted the medical record reviews: a physician board certified in internal medicine and preventive medicine (J.Y.O.), an infection prevention nurse (J.T.), a public health nurse (S.W.M.), an epidemiologist (M.C.C.), and a research analyst/principal investigator (Z.G.B.). Four of the 5 reviewers attended CDC training in applying NHSN HAI surveillance definitions that was part of a special study of CDC’s Emerging Infections Program; the remaining reviewer completed the same Internet-based NHSN training modules as hospital IPs and studied the NHSN Patient Safety Component Manual.17 Reviewers were blinded to the hospital’s CLABSI determination. The reviews occurred during March 2010–April 2011, after the hospitals had reported their 2009 CLABSIs. We used a standardized 2-page form to collect data necessary for ascertaining whether the patient had an ICU CLABSI in accordance with the NHSN surveillance definition.17 All medical record reviews were done on site at the hospital by 2–4 OPHD personnel during a single visit. Reviewers determined whether a CLABSI was present, consulting with other reviewers as needed. Multiple reviews of the same record were not routinely performed. We estimated that an average of 30 minutes was required to review each medical record.
Adjudication of Divergent CLABSI Assessments
Hospitals were notified of cases in which the OPHD CLABSI determinations were discordant from hospital results. From 2 to 4 weeks after the medical record review, we reviewed these discordant findings during a single telephone conference call with each hospital’s representatives, including an IP and at least 1 hospital physician (typically an infectious-diseases specialist). An OPHD board-certified infectious-diseases physician (P.R.C.), who was familiar with NHSN HAI surveillance definitions, also participated, providing clinical expertise to help interpret difficult classification cases. We mutually determined the presence or absence of CLABSI on the basis of the NHSN CLABSI surveillance definition. Persistent disagreements and particularly complex cases were referred to CDC NHSN staff for consultation and adjudication.
Estimation of Statewide CLABSIs
Because 7 hospitals underwent sampling of ICU-related bacteremia episodes not reported as CLABSIs, we adjusted our results to estimate the number of statewide CLABSIs, using the method described by Kelly et al.18 For each sampled hospital, the count of ICU-related bacteremia episodes not reported as CLABSIs (cells C [“false negative”] and D [“true negative”] in Figure 1) was divided by the sampling fraction for that hospital. Adjustment enabled us to calculate the sensitivity and specificity of hospital ICU CLABSI reporting in Oregon on the basis of the final determination of CLABSIs as the reference standard. Without adjustment for sampling of ICU-related bacteremia episodes not reported as CLABSIs, sensitivity would have been falsely elevated and specificity would have been falsely decreased.
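A minimal sketch of this adjustment, using an invented sampled hospital rather than the study's actual hospital-level data, shows the mechanics: the reviewed counts in the unreported-episode cells are divided by the hospital's sampling fraction to estimate what a full census would have found.

```python
def adjust_for_sampling(false_negatives, true_negatives, sampling_fraction):
    """Scale the reviewed unreported-episode cells (C and D in Figure 1)
    by the inverse of the sampling fraction, per the Kelly et al method.
    Returns the estimated cell counts for a full census."""
    return (false_negatives / sampling_fraction,
            true_negatives / sampling_fraction)

# Hypothetical sampled hospital (not the study's data): 44 of 110
# unreported ICU-related bacteremia episodes reviewed (fraction 0.4),
# yielding 2 false negatives and 42 true negatives on review.
c_adj, d_adj = adjust_for_sampling(2, 42, 44 / 110)
# Estimated census counts: about 5 false negatives and 105 true negatives
```

Without this scaling, the reviewed false negatives would understate the hospital's true number of missed CLABSIs, inflating measured sensitivity, while the understated true negatives would depress measured specificity.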
The κ statistic was used to determine interrater reliability of CLABSI determinations between hospitals and OPHD reviewers. χ2 tests were used to compare sensitivity of CLABSI reporting. χ2, Student t, Fisher exact, and Wilcoxon rank-sum tests were used to compare CLABSI cases that were correctly reported with those that were not reported by hospitals. All analyses were conducted in SAS 9.2 (SAS Institute).
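As a worked check, Cohen's κ for the hospital-versus-OPHD comparison can be recomputed from counts reported in the Results (76 hospital-reported CLABSIs, 95 OPHD-detected CLABSIs, and 782 agreements among 817 episodes). The function below is a generic two-rater calculation, not the study's SAS code:

```python
def cohen_kappa(a_pos, b_pos, agree, n):
    """Cohen's kappa for two binary raters, computed from each rater's
    positive count, the number of agreements, and the total n."""
    p_o = agree / n                                              # observed agreement
    p_e = (a_pos * b_pos + (n - a_pos) * (n - b_pos)) / n ** 2   # chance agreement
    return (p_o - p_e) / (1 - p_e)

# Marginals from the Results: hospitals reported 76 CLABSIs, the OPHD
# initially detected 95, and the two agreed on 782 of 817 episodes.
kappa = cohen_kappa(a_pos=76, b_pos=95, agree=782, n=817)
# kappa ≈ 0.77, consistent with the reported 95% CI of 0.70-0.84
```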
Results
The median number of records reviewed at each hospital was 14.5 (range, 0–53). In 18 (41%) of the 44 hospitals, the number of records reviewed was no more than 10. Among 817 medical records reviewed, the hospitals reported 76 CLABSIs, and the OPHD initially detected 95 CLABSIs (Table 1). After adjudication, 86 of the records were classified as indicating CLABSIs. Four records required CDC NHSN consultation, of which 2 were classified as indicating CLABSIs. The hospitals and the OPHD initially agreed on CLABSI status in 782 (96%) of the 817 records (κ = 0.77; 95% confidence interval [CI], 0.70–0.84). Of the 35 records in which the hospitals and the OPHD initially disagreed as to whether a CLABSI was present, we adjudicated 18 (51%) as CLABSIs and 17 (49%) as not CLABSIs. In 13 (37%) of the 35 discordant cases, the hospital's determination was deemed correct; in the other 22 (63%), the OPHD's determination was deemed correct.
|Hospital report||Health department review||Final determination||No. (%) of patients||Agreement between hospital and health department|
|CLABSI||CLABSI||CLABSI||68 (8)||Yes|
|Not CLABSI||Not CLABSI||Not CLABSI||714 (87)||Yes|
|Not CLABSI||CLABSI||CLABSI||16 (2)||No|
|Not CLABSI||CLABSI||Not CLABSI||11 (1)||No|
|CLABSI||Not CLABSI||CLABSI||2 (<1)||No|
|CLABSI||Not CLABSI||Not CLABSI||6 (1)||No|
Of the 16 ICU-related bacteremia episodes that were adjudicated to be CLABSIs but had not been reported by the hospital, 7 (44%) had no clearly discernible reason for the failure to detect the CLABSI (Table 2); during the conference call with the hospitals, we concluded that the hospital recognized the presence of an ICU-related bacteremia but did not recognize that the episode met the NHSN CLABSI case definition. In another 7 (44%) cases, the hospital failed to recognize the CLABSI because the ICU-related bacteremia episode was mistakenly deemed secondary to another source of infection (eg, intra-abdominal). When patient characteristics of CLABSIs that were correctly reported (true positive) were compared with those of CLABSIs that were not reported (false negative), no statistically significant differences were found in patient age, sex, hospital length of stay, placement of the central line at least 7 days before CLABSI, infecting organism, or isolation of an organism that might be a contaminant (NHSN laboratory-confirmed bloodstream infection [BSI] criterion 2; Table 3). Among the 6 cases reported by the hospital as ICU CLABSIs but determined to be incorrect (false positive), reasons included incorrectly attributing a CLABSI to the ICU (3), failure to attribute the infection to another source (1), failure to note that the infection was present on admission (1), and not recognizing that an isolated organism was likely a contaminant (1).
|Reason||No. (%) of episodes|
|No clearly discernible reason determined||7 (44)|
|Misattributed CLABSI to alternative source of infection||7 (44)|
|Recognized CLABSI but failed to attribute to ICU||1 (6)|
|Misclassified CLABSI as present at admission||1 (6)|
|Characteristic||Correctly reported by hospital (n = 70)||Not reported by hospital (n = 16)||P|
|Age, years, mean (range)a||57.0 (20–81)||61.7 (44–87)||.23|
|Male sex||43 (61)||9 (56)||.70|
|NHSN LCBI criterion 2 (BSI potentially a contaminant)||17 (24)||4 (25)||1.00|
|Hospital length of stay, days, median (range)b||27 (2–101)||24.5 (5–74)||.24|
|Central line placed ≥7 days before CLABSIa||15 (21)||3 (19)||1.00|
|Candida sp.||20 (29)||6 (38)||.97|
|Coagulase-negative Staphylococcus||16 (23)||4 (25)|
|Staphylococcus aureus||12 (17)||2 (12)|
|Enterococcus sp.||8 (11)||1 (6)|
|Other||14 (20)||3 (19)|
Table 4 indicates final adjudication for the 76 reported CLABSIs and, adjusting for the sampling fraction, the estimated number of unreported CLABSIs. After adjustment for sampling, we estimated 97 ICU CLABSIs throughout Oregon in 2009, a 28% increase from the 76 reported CLABSI cases. We estimated that 8% of all positive blood cultures in ICU patients were CLABSIs. When final CLABSI status was used as the reference standard, the sensitivity of hospital reporting was 72% (95% CI, 62%–81%); specificity was 99%, positive predictive value was 92%, and negative predictive value was 98%. Reporting by the 7 sampled hospitals, which collectively accounted for 31,509 (50.0%) of Oregon's 63,027 total ICU central line–days, was 76% sensitive (95% CI, 64%–85%), compared with a sensitivity of 63% (95% CI, 44%–79%) in the 37 census hospitals. Discussing discordant findings during adjudication increased the measured sensitivity of hospital CLABSI reporting; with only the OPHD external review as the reference standard, the sensitivity of CLABSI reporting by all 44 hospitals would have been 60% (95% CI, 51%–69%).
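Several of these figures can be recomputed from counts stated in the text (70 correctly reported CLABSIs from Table 3, 76 reported CLABSIs, the sampling-adjusted estimate of 97 CLABSIs, and 63,027 central line–days). The arithmetic below is a verification sketch, not the study's analysis code:

```python
true_pos    = 70      # CLABSIs correctly reported by hospitals (Table 3)
reported    = 76      # all hospital-reported ICU CLABSIs
est_clabsis = 97      # sampling-adjusted statewide CLABSI estimate
line_days   = 63_027  # total ICU central line-days, Oregon, 2009

sensitivity    = true_pos / est_clabsis          # ~0.72, as reported
ppv            = true_pos / reported             # ~0.92, as reported
rate_reported  = 1000 * reported / line_days     # ~1.21 per 1,000 line-days
rate_validated = 1000 * est_clabsis / line_days  # ~1.54 per 1,000 line-days
```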
After discussion and adjudication of discordant cases, the statewide rate of ICU CLABSIs per 1,000 central line–days increased from 1.21 (95% CI, 0.95–1.51) to 1.54 (95% CI, 1.25–1.88). The CLABSI validation did not change the reported ICU CLABSI rate in 33 (75%) of the 44 hospitals; in each of these hospitals, all records reviewed were correctly reported (Table 5). In 23 (70%) of these 33 hospitals, no CLABSIs were identified either before or after the validation; the total number of ICU-related bacteremia episodes reviewed among these 23 hospitals was 165. One hospital reported 2 ICU CLABSIs that were ultimately determined not to be ICU CLABSIs; this hospital's ICU CLABSI rate decreased from 0.70/1,000 central line–days to 0. In contrast, 6 hospitals (14%) had increases in ICU CLABSI rate of more than 1.00/1,000 central line–days. These 6 hospitals were heterogeneous, with ICU bed capacity ranging from 4 to 146. Four of the 6 were census hospitals, which collectively reported 1 ICU CLABSI but were determined during the validation to have had 8 ICU CLABSIs. The other 2 were sampled hospitals, which reported 28 ICU CLABSIs but were estimated after validation to have had 40.
Discussion
Mandatory CLABSI reporting systems without external validation can lead to findings that are inaccurate, misleading, and potentially counterproductive to the goal of eliminating CLABSIs.19,20 In contrast with the reported statewide ICU CLABSI rate of 1.21/1,000 central line–days, after validation we reported an estimated statewide ICU CLABSI rate of 1.54/1,000 central line–days. Our validated estimated ICU CLABSI rate remained slightly lower than the national pooled mean ICU CLABSI rate of 1.65/1,000 central line–days, derived from approximately 1,600 hospitals participating in NHSN in 2009.21
Our analysis adds to the evidence that variability in applying NHSN CLABSI surveillance definitions can result in underreporting. However, most of the participating facilities (75%) reported fully accurate data, as measured in this evaluation. The limited numbers of ICU CLABSIs and ICU-related bacteremia episodes in these hospitals (median, 10 medical records reviewed among the 33 hospitals) likely influenced this finding. Furthermore, despite 23 hospitals having had no ICU CLABSIs either before or after validation, this might have just as readily resulted from the numbers of patients at risk as from superior infection prevention programs.
We estimated the overall sensitivity of hospital reporting of CLABSIs to be 72%, which is higher than that reported in some other published CLABSI validation studies.13,14 Had we not discussed discordant findings before ultimately classifying ICU CLABSI status and instead used the health department reviewers’ initial ICU CLABSI determination as the reference standard, the measured sensitivity of hospital reporting would have been 60%. Our findings indicate that the magnitude of hospital underreporting of ICU CLABSIs is reduced if hospitals are given the opportunity to discuss specific cases that were classified discordantly by external reviewers.
The postreview discussion was a valuable training tool for both the hospitals and the OPHD. This process highlighted particularly complex aspects of NHSN CLABSI definitions that are prone to misinterpretation and allowed hospitals and OPHD external reviewers to fully incorporate all information necessary for CLABSI classification. Since applying NHSN surveillance definitions inevitably requires subjectivity, which erodes interrater reliability,22,23 discussing discordant findings provided mutually beneficial insight into diverse ways of interpreting clinical data. The discussion also enhanced the credibility and transparency of the CLABSI review process and provided a forum in which hospitals could comment and evaluate our findings.
Discussing discordant review findings also helps address complications posed by reviewers' variable fluency with a hospital's electronic health record (EHR) system. In our analysis, medical record review required accessing an EHR in 37 of 44 hospitals. Although EHRs can facilitate medical record reviews (eg, through user-defined queries) that efficiently extract data of interest, each EHR entails a learning curve before relevant data can be located efficiently and accurately, a challenge for external reviewers who might have to navigate multiple, heterogeneous EHR systems. Hospital-based IPs might thus have an advantage in reviewing records, especially for patients with lengthy, complex stays.24
Mandatory CLABSI reporting can also have counterproductive consequences that warrant the attention of policymakers and quality leaders. First, meaningful comparisons of hospital quality are hampered by variability in case-finding thoroughness for detecting ICU CLABSIs, notably the extent to which IPs systematically and completely evaluate each ICU-related bacteremia episode as a potential CLABSI. Second, the CLABSI surveillance definition has been criticized as insufficiently specific and therefore lacking clinical credibility.25 Finally, correct application of NHSN HAI surveillance definitions requires substantive training and practice to develop proficiency. Time spent collecting data and ascertaining cases competes with other IP duties, which include implementing interventions to prevent HAIs.26
Our study had several limitations. First, because of resource constraints, we did not rigorously assess the accuracy of hospitals’ reported central line–days, which constitute the denominator for calculation of CLABSI rates.17 Systematic bias in reporting central line–days would have affected the accuracy of ICU CLABSI rates. Second, the proficiency of OPHD reviewers and hospital IPs in interpreting and applying NHSN CLABSI criteria was not systematically assessed before they reviewed medical records. Our reviewers and hospital IPs, however, likely typified the level of training, experience, and proficiency available in public health agencies and hospitals. Third, we did not discuss cases in which hospital personnel and OPHD reviewers concurred in their initial determinations, and we accepted that they were accurate. The true extent of CLABSIs might have been higher or lower if systematic, differential errors in classifying CLABSIs were shared by both parties. Reviewing every ICU-related bacteremia episode, however, was impractical. Fourth, the postreview discussion did not include a third-party expert to ensure that hospital and OPHD reviewers were interpreting and applying NHSN criteria correctly and not engaging in erroneous groupthink. However, we routinely consulted experts at CDC regarding complex or puzzling cases, and we deferred disposition of these cases, pending CDC adjudication. Finally, in accordance with the statewide mandate, our analysis and conclusions were restricted to CLABSIs attributed to ICU settings.
We conclude that validating hospital ICU CLABSI reporting improves accuracy of hospital-based CLABSI surveillance systems. Discussing discordant findings with hospitals increases the accuracy of identifying CLABSIs and promotes acceptability, credibility, and transparency of the review process. States considering ICU CLABSI validation should consider discussing findings with hospitals before concluding the presence of CLABSI.
We thank Diane Roy, Valerie Ocampo, Karen Lewis, Jessie Tenney, Sheryl Lyss, the Oregon and Southern Washington Chapter of the Association for Professionals in Infection Control and Epidemiology, and the 44 Oregon hospitals and their staffs who supported this investigation.
Financial support. This study was supported by CDC ARRA HAI ELC grant 280683.
Potential conflicts of interest. All authors report no conflicts of interest relevant to this article. All authors submitted the ICMJE Form for Disclosure of Potential Conflicts of Interest, and the conflicts that the editors consider relevant to this article are disclosed here.
- 1. Laupland KB, Lee H, Gregson DB, Manns BJ. Cost of intensive care unit-acquired bloodstream infections. J Hosp Infect 2006;63(2):124–132.
- 2. Pittet D, Tarara D, Wenzel RP. Nosocomial bloodstream infection in critically ill patients: excess length of stay, extra costs, and attributable mortality. JAMA 1994;271(20):1598–1601.
- 3. Warren DK, Quadir WW, Hollenbeak CS, Elward AM, Cox MJ, Fraser VJ. Attributable cost of catheter-associated bloodstream infections among intensive care patients in a nonteaching hospital. Crit Care Med 2006;34(8):2084–2089.
- 4. Pronovost P, Needham D, Berenholtz S, et al. An intervention to decrease catheter-related bloodstream infections in the ICU. N Engl J Med 2006;355(26):2725–2732.
- 5. Centers for Disease Control and Prevention. Reduction in central line–associated bloodstream infections among patients in intensive care units—Pennsylvania, April 2001–March 2005. MMWR 2005;54(40):1013–1016.
- 6. Centers for Disease Control and Prevention. Guidelines for the prevention of intravascular catheter-related infections. MMWR 2002;51(RR10):1–26.
- 7. US Department of Health and Human Services. Action plan to prevent healthcare-associated infections. http://www.hhs.gov/ash/initiatives/hai/actionplan/hhs_hai_action_plan_final_06222009.pdf. Published June 2009. Accessed March 3, 2012.
- 8. McKibben L, Horan TC, Tokars JI, et al. Guidance on public reporting of healthcare-associated infections: recommendations of the Healthcare Infection Control Practices Advisory Committee. Infect Control Hosp Epidemiol 2005;26(6):580–587.
- 9. Fung CH, Lim Y-W, Mattke S, Damberg C, Shekelle PG. Systematic review: the evidence that publishing patient care performance data improves quality of care. Ann Intern Med 2008;148(2):111–123.
- 10. Association for Professionals in Infection Control and Epidemiology. HAI reporting laws and regulations: states that have enacted laws related to reporting of healthcare-associated infections. http://www.apic.org/Resource_/TinyMceFileManager/Advocacy-PDFs/HAI_map.gif. Accessed March 3, 2012.
- 11. Office for Oregon Health Policy and Research. Federal vs. state HAI hospital reporting. http://www.oregon.gov/OHA/OHPR/docs/HCAIAC/Reporting/Update_0611/FederalvsStateHAIReporting.pdf/. Accessed August 15, 2011.
- 13. Backman LA, Melchreit R, Rodriguez R. Validation of the surveillance and reporting of central line-associated bloodstream infection data to a state health department. Am J Infect Control 2010;38(10):832–838.
- 14. McBryde ES, Brett J, Russo PL, Worth LJ, Bull AL, Richards MJ. Validation of statewide surveillance system data on central line–associated bloodstream infection in intensive care units in Australia. Infect Control Hosp Epidemiol 2009;30(11):1045–1049.
- 15. New York State Department of Health. New York State hospital-acquired infection reporting system: pilot year—2007. http://www.health.state.ny.us/statistics/facilities/hospital/hospital_acquired_infections/. Published July 2008. Accessed August 15, 2011.
- 16. Office for Oregon Health Policy and Research. Oregon healthcare acquired infections. Salem: Office for Oregon Health Policy and Research; 2010.
- 18. Kelly H, Bull A, Russo P, McBryde ES. Estimating sensitivity and specificity from positive predictive value, negative predictive value and prevalence: application to surveillance systems for hospital-acquired infections. J Hosp Infect 2008;69(2):164–168.
- 19. McKibben L, Fowler G, Horan T, Brennan PJ. Ensuring rational public reporting systems for health care–associated infections: systematic literature review and evaluation recommendations. Am J Infect Control 2006;34(3):142–149.
- 20. Perla RJ, Peden CJ, Goldmann D, Lloyd R. Health care–associated infection reporting: the need for ongoing reliability and validity assessment. Am J Infect Control 2009;37(8):615–618.
- 21. Centers for Disease Control and Prevention. Vital signs: central line–associated blood stream infections—United States, 2001, 2008, and 2009. MMWR 2011;60(8):243–248.
- 22. Worth LJ, Brett J, Bull AL, McBryde ES, Russo PL, Richards MJ. Impact of revising the National Nosocomial Infection Surveillance System definition for catheter-related bloodstream infection in ICU: reproducibility of the National Healthcare Safety Network case definition in an Australian cohort of infection control professionals. Am J Infect Control 2009;37(8):643–648.
- 23. Mayer J, Howell J, Green T, et al. Assessing inter-rater reliability (IRR) of surveillance decisions by infection preventionists (IPs). In: Fifth Decennial International Conference on Healthcare-Associated Infections. Atlanta, GA. March 2010. Abstract 79.
- 24. Emori TG, Edwards JR, Culver DH, et al. Accuracy of reporting nosocomial infections in intensive-care–unit patients to the National Nosocomial Infections Surveillance System: a pilot study. Infect Control Hosp Epidemiol 1998;19(5):308–316.
- 25. Sexton DJ, Chen LF, Anderson DJ. Current definition of central line–associated bloodstream infection: is the emperor wearing clothes? Infect Control Hosp Epidemiol 2010;31(12):1286–1289.
- 26. Tokars JI, Richards C, Andrus M, et al. The changing face of surveillance for health care–associated infections. Clin Infect Dis 2004;39(9):1347–1352.