
Evaluating the Accuracy of Sampling to Estimate Central Line–Days: Simplification of the National Healthcare Safety Network Surveillance Methods

Nicola D. Thompson PhD MS, Jonathan R. Edwards MStat, Wendy Bamberg MD, Zintars G. Beldavs MS, Ghinwa Dumyati MD FSHEA, Deborah Godine RN CIC, Meghan Maloney MPH, Marion Kainer MBBS MPH, Susan Ray MD, Deborah Thompson MD MSPH FACPM, Lucy Wilson MD ScM and Shelley S. Magill MD PhD
Infection Control and Hospital Epidemiology
Vol. 34, No. 3 (March 2013), pp. 221-228
DOI: 10.1086/669515
Stable URL: http://www.jstor.org/stable/10.1086/669515
Page Count: 8
Original Article

Evaluating the Accuracy of Sampling to Estimate Central Line–Days: Simplification of the National Healthcare Safety Network Surveillance Methods

Nicola D. Thompson, PhD, MS,1
Jonathan R. Edwards, MStat,1
Wendy Bamberg, MD,2
Zintars G. Beldavs, MS,3
Ghinwa Dumyati, MD, FSHEA,4
Deborah Godine, RN, CIC,5
Meghan Maloney, MPH,6
Marion Kainer, MBBS, MPH,7
Susan Ray, MD,8
Deborah Thompson, MD, MSPH, FACPM,9
Lucy Wilson, MD, ScM,10 and
Shelley S. Magill, MD, PhD1
1. Division of Healthcare Quality Promotion, Centers for Disease Control and Prevention, Atlanta, Georgia
2. Colorado Department of Public Health and Environment, Denver, Colorado
3. Oregon Health Authority, Portland, Oregon
4. University of Rochester, Rochester, New York
5. California Emerging Infections Program, Oakland, California
6. Connecticut Department of Public Health, Hartford, Connecticut
7. Tennessee Department of Health, Nashville, Tennessee
8. Georgia Emerging Infections Program, Atlanta, Georgia
9. New Mexico Department of Health, Santa Fe, New Mexico
10. Maryland Department of Health and Mental Hygiene, Baltimore, Maryland
    Address correspondence to Nicola D. Thompson, PhD, MS, Division of Healthcare Quality Promotion, Centers for Disease Control and Prevention, 1600 Clifton Road MS A-24, Atlanta, GA 30333.

Objective. To evaluate the accuracy of weekly sampling of central line–associated bloodstream infection (CLABSI) denominator data to estimate central line–days (CLDs).

Design. Obtained CLABSI denominator logs showing daily counts of patient-days and CLD for 6–12 consecutive months from participants and CLABSI numerators and facility and location characteristics from the National Healthcare Safety Network (NHSN).

Setting and Participants. Convenience sample of 119 inpatient locations in 63 acute care facilities within 9 states participating in the Emerging Infections Program.

Methods. Actual CLD and estimated CLD obtained from sampling denominator data on all single-day and 2-day (day-pair) samples were compared by assessing the distributions of the CLD percentage error. Facility and location characteristics associated with increased precision of estimated CLD were assessed. The impact of using estimated CLD to calculate CLABSI rates was evaluated by measuring the change in CLABSI decile ranking.

Results. The distribution of CLD percentage error varied by the day and number of days sampled. On average, day-pair samples provided more accurate estimates than did single-day samples. For several day-pair samples, approximately 90% of locations had CLD percentage error of less than or equal to ±5%. A lower number of CLD per month was most significantly associated with poor precision in estimated CLD. Most locations experienced no change in CLABSI decile ranking, and no location’s CLABSI ranking changed by more than 2 deciles.

Conclusions. Sampling to obtain estimated CLD is a valid alternative to daily data collection for a large proportion of locations. Development of a sampling guideline for NHSN users is underway.

Central line–associated bloodstream infections (CLABSIs) are healthcare-associated infections (HAIs) of significant public health importance and have become a prominent healthcare quality measure. CLABSI is the HAI most frequently subject to state mandatory reporting requirements,1 and the Centers for Medicare and Medicaid Services Inpatient Prospective Payment System rule now requires that participating hospitals report CLABSI surveillance data from intensive care unit (ICU) locations via the Centers for Disease Control and Prevention (CDC) National Healthcare Safety Network (NHSN) to receive their annual payment update.2

Current NHSN methodology for the collection of CLABSI denominator data requires a daily aggregate count of the number of patients (ie, patient-days) and the number of patients with 1 or more central lines in place (ie, central line–days [CLDs]) in the patient care location under surveillance.3 Although the use of CLD as the denominator for CLABSI rate calculation has been shown to be necessary,4 daily collection of CLD is recognized to be a predominantly manual5 and labor-intensive process,6-8 which is a barrier to reporting the data needed to calculate CLABSI rates.4 With increasing demands for the reporting of HAI surveillance data, identifying valid surveillance methods that decrease the burden of data collection for NHSN users is a CDC priority.

Building upon earlier efforts that assessed the use of sampling to collect CLD,7,9 we sought to evaluate this approach in a large number of acute care hospitals and patient care locations. Our primary objective was to evaluate the accuracy of estimating CLD from weekly sampling of denominator data compared with actual CLD based on daily collection of denominator data, which is the current NHSN methodology. Our secondary objective was to identify facility and location characteristics associated with greater precision of estimated CLD.

Methods

Participants and Data Collection

Through the CDC's Emerging Infections Program infrastructure, a network of state health departments and their collaborators, a convenience sample of acute care hospitals located within 9 states (CA, CO, CT, GA, MD, NM, NY, OR, and TN) that performed NHSN CLABSI surveillance was identified for participation. Eligible patient care locations included critical care unit (ie, ICU), step-down, or ward locations. Neonatal intensive care units (NICUs) and specialty care areas were excluded because of differences in CLABSI denominator data collection methods.3 Enrolled facilities and locations that met inclusion criteria were asked to submit 6–12 consecutive months of their preexisting CLABSI denominator data logs from 2009 or 2010. Logs were required to show daily counts of patient-days and CLD.10 The number of CLABSI reported during the period of denominator data submitted by participating locations, along with facility and location characteristics, was obtained from NHSN by CDC staff.

Data Analysis

Descriptive characteristics of participating facilities and locations were reported, with device utilization ratios (DURs) and CLABSI rates calculated using standard NHSN methods.3 For each location, 7 day samples of data were created, each including the denominator data collected on a specific day of the week. For example, the Monday day sample included only the patient-days and CLD collected on each Monday during the period from which denominator data were submitted. Using the data from each sample, a sample DUR (sample CLD/sample patient-days) was calculated and used to generate an estimate of CLD (sample DUR × actual patient-days reported) and an estimated CLABSI rate per 1,000 CLD (actual number of CLABSI reported/estimated CLD × 1,000). Using the same method, analysis of day-pair samples (eg, combined data for 2 days, such as Sunday and Monday, Sunday and Tuesday, and Sunday and Wednesday) was performed for all 21 day-pair combinations of the 7 day samples. To assess the accuracy of sampling, we determined which samples provided the most accurate estimates of CLD by generating and comparing box and whisker plots and tables showing the distribution (median; 5th, 25th, 75th, and 95th percentiles; and outliers) of the CLD percentage error (defined as the relative percentage difference between estimated and actual CLD), using the Kruskal-Wallis and Kolmogorov-Smirnov tests. The proportion of locations with estimated CLD within 5% of actual CLD (ie, CLD percentage error less than or equal to ±5%) was also compared across samples. To identify characteristics associated with increased precision of estimated CLD, nonoutlier locations (CLD percentage error less than or equal to ±5%) were compared with outlier locations (CLD percentage error greater than ±5%).
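As an illustration of the estimation method above, the following Python sketch (not from the article; all daily counts are made up) computes a day-sample DUR, an estimated CLD, and the resulting CLD percentage error:

```python
# Hypothetical daily denominator counts for one location over 4 weeks,
# listed Monday through Sunday and repeating weekly (made-up data).
patient_days = [20, 22, 21, 19, 18, 17, 20] * 4      # daily patient counts
central_line_days = [9, 10, 9, 8, 8, 7, 9] * 4       # daily counts of patients with a central line

actual_cld = sum(central_line_days)   # actual CLD from daily collection
actual_pd = sum(patient_days)         # actual patient-days (still reported daily)

# Monday "day sample": keep only every 7th day's counts (index 0 = Monday here).
sample_pd = sum(patient_days[0::7])
sample_cld = sum(central_line_days[0::7])

# Sample DUR = sample CLD / sample patient-days.
sample_dur = sample_cld / sample_pd

# Estimated CLD = sample DUR x actual patient-days reported.
estimated_cld = sample_dur * actual_pd

# CLD percentage error = relative percentage difference between estimated and actual CLD.
pct_error = 100 * (estimated_cld - actual_cld) / actual_cld

print(round(estimated_cld, 1), round(pct_error, 2))  # prints: 246.6 2.75
```

With these made-up counts the Monday sample estimates 246.6 CLD against an actual 240, a percentage error of about 2.75%, which would fall within the ±5% threshold used in the analysis.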

Finally, to evaluate the impact of using estimated CLD on CLABSI rates, we calculated the change in the CLABSI decile ranking for locations. Using the actual CLABSI rate, all locations were ranked in 10 evenly sized groups (deciles); when rates were tied (eg, locations where CLABSI rate was 0 per 1,000 CLD), locations were grouped together. This process was repeated for each sample using the estimated CLABSI rate (calculated using estimated CLD). Next, the decile rankings based on actual and estimated CLABSI rates were compared to determine the change in decile rank for each location (eg, if the location rank was 7th decile based on actual CLABSI rate and 6th decile for estimated CLABSI rate, the change in rank was 1 decile). To fully assess the impact of using estimated CLD on CLABSI rates, this was first performed including all locations and then performed including only locations that had a nonzero CLABSI rate.
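The decile-ranking comparison described above can be sketched as follows (hypothetical rates; the tie-grouping and decile assignment here are a simplified stand-in for the article's ranking procedure):

```python
# Made-up CLABSI rates per 1,000 CLD for 20 hypothetical locations.
actual_rates = [0.0, 0.0, 0.5, 0.7, 0.9, 1.1, 1.2, 1.4, 1.5, 1.7,
                1.9, 2.0, 2.2, 2.4, 2.6, 2.8, 3.0, 3.3, 3.7, 4.2]

# Estimated rates differ because estimated CLD replaces actual CLD in the
# denominator; a uniform 4% shift is assumed here purely for illustration.
estimated_rates = [r * 1.04 for r in actual_rates]

def decile_ranks(rates):
    """Rank locations into 10 evenly sized groups (decile 1-10)."""
    srt = sorted(rates)
    # srt.index(r) returns the first position of a tied value, so tied
    # rates (eg, the two 0.0 rates) are grouped into the same decile.
    return [srt.index(r) * 10 // len(rates) + 1 for r in rates]

# Change in decile rank for each location when estimated rates are used.
changes = [abs(a - e) for a, e in
           zip(decile_ranks(actual_rates), decile_ranks(estimated_rates))]
print(changes)  # a uniform shift preserves the ordering, so every change is 0
```

A uniform error leaves every location's rank unchanged; in practice the day-to-day sampling error varies by location, which is what can move a location by a decile or two.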

The Pearson χ2 or Fisher exact test (when expected cell frequencies were less than 5) was used to assess categorical variables, and the Kruskal-Wallis test was used to assess median values of continuous variables. All tests were 2-sided, and P values of less than or equal to .05 were considered statistically significant. All statistical analysis was performed using SAS, version 9.2 (SAS Institute).

Human Subjects Review

A protocol for this surveillance evaluation project was reviewed by the Office of the Director in the National Center for Emerging and Zoonotic Infectious Diseases at the CDC and determined to not be research involving human subjects.

Results

Description of Participating Hospitals and Locations, Assessment of Denominator Data Quality

Sixty-three acute care hospitals were enrolled and submitted 6–12 consecutive months of CLABSI denominator data logs for 119 locations. A total of 1,246 months of denominator data were included in the analysis, comprising 401,730 patient-days and 173,762 CLD. In total, 264 CLABSI were reported during this time period, for a pooled mean CLABSI incidence rate of 1.52 cases per 1,000 CLD. Descriptive characteristics of participating facilities, locations, and data submitted are presented in Table 1.

Table 1. 
Descriptive Characteristics of Participating Facilities, Locations, and Data Submitted
| Variable | Facility characteristics (n = 63) | Location characteristics (n = 119) |
| Type, no. (%) of facilities or locations | | |
|  General hospital | 94 (59) | |
|  Adult medical/surgical ICU | | 42 (50) |
|  Other ICU(a) | | 40 (48) |
|  Ward or step-down unit | | 18 (21) |
| No. of beds | 135 (86–260) | 12 (8–22) |
| No. (%) of locations with 12 months of data submitted | | 63 (75) |
| Total no. of patient-days(b) | | 2,786 (1,569–4,107) |
| Mean no. of patient-days/month | | 262 (155–441) |
| Total CLD(b) | | 1,169 (611–2,073) |
| Mean CLD/month | | 123 (59–193) |
| Device utilization ratio | | 0.45 (0.28–0.64) |
| No. of CLABSI | | 1 (0–4) |
| CLABSI rate per 1,000 CLD(b) | | 1.15 (0–2.38) |

Overall, the quality of CLABSI denominator data was high, with more than 97% of days of denominator data submitted containing none of the 4 data quality errors assessed (Table 2). The most frequently identified data quality error, found in 2.03% of records, was individual days for which both patient-days and CLD were not reported, primarily in 5 locations where denominator data were systematically not collected on Saturday or Sunday. Among locations, 71% had less than 1% of days with a data quality error.

Table 2. 
Data Quality Evaluation for 37,995 Days of Denominator Data Reported by 119 Locations
| Data quality error criteria evaluated | No. (%) of days with error identified |
| Patient-days and CLD missing | 770 (2.03) |
| Only patient-days missing | 10 (0.03) |
| Only CLD missing | 97 (0.26) |
| CLD greater than patient-days | 143 (0.38) |
| Total | 1,020 (2.68) |

Assessment of CLD Percentage Error

Inspection of box and whisker plots and comparison of the distributions of the CLD percentage error revealed differences among the 7 day samples (Figure 1, Table 3). The distribution of CLD percentage error differed significantly by day sample (Kruskal-Wallis test). Using the width of the 5th–95th percentile interval (the range including 90% of locations), the greatest precision in the distribution of the CLD percentage error was observed for the Tuesday and Thursday samples. However, the proportion of locations with CLD percentage error less than or equal to ±5% did not differ significantly among the day samples (χ2 test) and ranged from 54% for Friday to 69% for Thursday.

Figure 1. 

Box and whisker plot showing percentile distribution for central line–day (CLD) percentage error for all day samples (119 locations; asterisk indicates 114 locations). Observations greater than 5 times the interquartile range for Friday have been removed for display purposes only (2 observations removed). F, Friday; M, Monday; Sa, Saturday; Su, Sunday; Th, Thursday; Tu, Tuesday; W, Wednesday.

Table 3. 
Percentile Distribution of Central Line–Day (CLD) Percentage Error and Locations with CLD Percentage Error Less than or Equal to ±5%, by Day and Day-Pair Sampled, for 119 Participating Locations

| Day or day-pair sampled | Median (50th) | Interquartile range | 5th | 95th | 5th–95th range | % (n) of locations with CLD percentage error ≤ ±5% |
| Su(a) | 2.17 | 6.54 | −7.38 | 19.17 | 26.55 | 63 (72) |
| M | 0.59 | 5.99 | −13.22 | 17.14 | 30.36 | 65 (77) |
| Tu | −1.47 | 6.55 | −10.03 | 11.36 | 21.39 | 64 (76) |
| W | −0.96 | 6.88 | −14.89 | 12.09 | 26.98 | 62 (74) |
| Th | −0.11 | 5.87 | −11.29 | 7.71 | 19.00 | 69 (82) |
| F | −0.08 | 9.54 | −18.51 | 12.01 | 30.52 | 54 (64) |
| Sa(a) | 1.03 | 6.63 | −12.30 | 11.39 | 23.69 | 65 (74) |
| SuM(b) | 1.20 | 6.06 | −6.67 | 17.53 | 24.20 | 68 (81) |
| SuTu | 0.09 | 4.58 | −4.93 | 11.31 | 16.24 | 82 (97) |
| SuW | 0.28 | 3.54 | −6.20 | 8.37 | 14.57 | 84 (100) |
| SuTh | 0.62 | 3.34 | −3.45 | 8.41 | 11.86 | 87 (103) |
| SuF | 1.13 | 4.73 | −11.88 | 7.63 | 19.51 | 76 (91) |
| SuSa(a,b) | 1.81 | 4.91 | −7.26 | 12.96 | 20.22 | 67 (80) |
| MTu(b) | −0.74 | 5.49 | −7.40 | 12.03 | 19.43 | 76 (91) |
| MW | −0.57 | 4.22 | −8.66 | 8.28 | 16.94 | 77 (92) |
| MTh | −0.12 | 2.97 | −4.21 | 6.27 | 10.48 | 92 (110) |
| MF | −0.22 | 3.84 | −7.43 | 7.55 | 14.98 | 81 (96) |
| MSa | 0.86 | 4.07 | −5.83 | 13.89 | 19.72 | 81 (96) |
| TuW(b) | −1.05 | 5.08 | −9.34 | 6.69 | 16.03 | 71 (85) |
| TuTh | −1.18 | 3.42 | −6.22 | 5.60 | 11.82 | 87 (103) |
| TuF | −0.98 | 4.07 | −9.10 | 5.46 | 14.56 | 79 (94) |
| TuSa | −0.10 | 3.33 | −4.01 | 5.73 | 9.74 | 92 (109) |
| WTh(b) | 0.86 | 5.03 | −10.67 | 7.59 | 18.26 | 75 (89) |
| WF | −0.70 | 4.82 | −14.73 | 3.99 | 18.72 | 79 (94) |
| WSa | −0.07 | 4.00 | −6.65 | 5.77 | 12.42 | 84 (100) |
| ThF(a) | 0.18 | 5.65 | −13.92 | 8.06 | 21.98 | 68 (81) |
| ThSa | 0.45 | 3.03 | −4.88 | 6.92 | 11.80 | 86 (102) |
| FSa(b) | 0.37 | 4.97 | −12.13 | 9.32 | 21.45 | 70 (83) |

The distribution of CLD percentage error differed significantly among the 21 day-pair samples (Kruskal-Wallis test). Using the width of the 5th–95th percentile interval, the greatest precision in the distribution of the CLD percentage error was observed for the Monday-Thursday and Tuesday-Saturday samples (Figure 2 and Table 3). Additionally, the proportion of locations with CLD percentage error less than or equal to ±5% differed significantly among day-pair samples (χ2 test), ranging from 67% for the Saturday-Sunday sample to 92% for the Monday-Thursday and Tuesday-Saturday samples. The average level of precision (ie, the proportion of locations with CLD percentage error less than or equal to ±5%) among the 7 consecutive day-pair samples (Table 3) was significantly lower than for nonconsecutive day-pair samples (70% and 83%, respectively; χ2 test).

Figure 2. 

Box and whisker plot showing percentile distribution for central line–day (CLD) percentage error for all 21 day-pair samples (119 locations; asterisk indicates 114 locations). Observations greater than 5 times the interquartile range for Sunday-Monday day-pair have been removed for display purposes only (10 observations removed). F, Friday; M, Monday; Sa, Saturday; Su, Sunday; Th, Thursday; Tu, Tuesday; W, Wednesday.

Characteristics Associated with Increased Precision of Estimated CLD

Assessment of characteristics associated with increased precision of estimated CLD was performed using the Thursday sample and the Monday-Thursday day-pair sample, the 2 samples that consistently yielded the most precise estimates of CLD (Table 4). For the Thursday sample, locations categorized as outliers (ie, those with CLD percentage error greater than ±5%) were significantly more likely to have fewer hospital beds and location beds, fewer patient-days and CLD per month, and lower DURs than nonoutliers (CLD percentage error less than or equal to ±5%). The strongest association was observed between the average number of CLD per month and outlier status. For the Monday-Thursday day-pair sample, locations categorized as outliers were significantly more likely to have submitted less than 12 months of data and to have fewer CLD per month and lower DURs than nonoutliers. Again, the strongest association was observed between the average number of CLD per month and outlier status.

Table 4. 
Comparison of Characteristics for Nonoutlier (Central Line–Day [CLD] Percentage Error Less than or Equal to ±5%) and Outlier (CLD Percentage Error Greater than ±5%) Locations for Thursday Sample and Monday-Thursday Day-Pair Sample
(Columns 2–4: Thursday sample; columns 5–7: Monday-Thursday day-pair sample.)

| Characteristic | Nonoutlier (n = 82) | Outlier (n = 37) | P(a) | Nonoutlier (n = 110) | Outlier (n = 9) | P(a) |
| Facility beds | 260 (120–394) | 150 (106–186) | .0054 | 196 (111–394) | 117 (117–159) | .1289 |
| ICU location, % (no.) of ICUs | 84 (69) | 78 (29) | .4449 | 84 (93) | 67 (6) | .1953 |
| Location beds | 13 (10–23) | 9.5 (7–16) | .0099 | 12 (8–21) | 12 (11–29) | .3720 |
| % (no.) of locations with 12 months of data submitted | 67 (55) | 54 (20) | .1733 | 66 (73) | 22 (2) | .0124 |
| Mean no. of patient-days/month | 298 (189–464) | 166 (115–306) | .0026 | 275 (165–448) | 234 (151–469) | .7707 |
| Mean no. of CLD/month | 149 (86–204) | 63 (35–116) | <.0001 | 130 (66–191) | 56 (35–103) | .0092 |
| Device utilization ratio | 0.50 (0.31–0.67) | 0.36 (0.17–0.55) | .0036 | 0.48 (0.29–0.66) | 0.24 (0.1–0.40) | .0163 |

Impact of Sampling on CLABSI Rate Ranking

When all 119 locations were included, the impact of using estimated CLD on CLABSI rates among day samples was minimal (Figure 3). No location experienced more than a 2-decile change in rank. The majority of locations, ranging from 83% for the Saturday sample to 93% for the Tuesday sample, had no change in their decile rankings, and differences among the day samples were not significant (χ2 test). Similarly, when only locations with a nonzero CLABSI rate were ranked, at least 78% of locations had no change in their decile rankings, and differences among the day samples were not significant (χ2 test).

Figure 3. 

Change in decile rank for central line–associated bloodstream infection (CLABSI) rate for all day samples and 7 selected day-pair samples for all 119 locations (single asterisk indicates 114 locations) and 84 locations (2 asterisks indicate 81 locations) with nonzero CLABSI rate. F, Friday; M, Monday; Sa, Saturday; Su, Sunday; Th, Thursday; Tu, Tuesday; W, Wednesday.

From among all 21 day-pair samples, the 7 most precise samples (those with the narrowest 5th–95th percentile range; Table 3) were evaluated. Again, the impact of using estimated CLD on CLABSI rates was minimal (Figure 3). No location changed more than 1 decile in rank. The majority of locations, ranging from 92% for Tuesday-Saturday to 97% for Tuesday-Friday, had no change in their decile rankings, and differences among the day-pair samples evaluated were not significant (χ2 test). When only locations with a nonzero CLABSI rate were ranked, a majority (at least 83%) had no change in their decile rankings, and differences among the day-pair samples evaluated were not significant (χ2 test).

Discussion

Because daily manual collection of CLDs, used as the denominator for calculating CLABSI rates, is reported to be time-consuming and burdensome,5,6,8 we evaluated the accuracy of using weekly sampling to estimate CLD. Fixed sampling schedules, such as the same day or day-pair each week, were evaluated, because this is easier to implement than a random sample of days. Our results indicate sampling the same day or 2 days each week would yield estimated CLD within 5% of actual CLD (ie, CLD percentage error less than or equal to ±5%) for up to 69% and 92% of locations, respectively. However, significant differences were observed among the percentile distributions for CLD percentage error, which suggests that certain days or day-pairs provide more accurate estimates than others. Similar variation in sampling accuracy by day sampled has been previously reported,7,9 which suggests that this finding is not spurious. We speculate that such differences are likely related to patterns of patient admission and discharge throughout the week. However, differences in how data are collected (manual vs electronic) or who collects data from day to day (eg, different staff members) may also contribute. We identified that having a lower mean number of CLD per month was most significantly associated with inaccurate estimated CLD, which suggests that a threshold exists below which sampling will yield inaccurate estimates. Finally, the impact of using estimated CLD on CLABSI rates appeared to be minimal, with a large proportion of locations experiencing no change in their decile ranking and few locations moving more than 1 decile in their CLABSI rank.

Our findings add further evidence in support of the use of sampling for CLDs as a valid strategy to obtain denominator data for CLABSI surveillance. Klevens et al9 showed that sampling 1 day per week yielded annual infection rates that were not meaningfully different from rates determined using daily denominator data collection. They also reported that sampling more days per week resulted in greater accuracy (smaller percentage error), but the marginal improvement diminished beyond sampling 2 days per week. However, their analysis included data from only 12 hospitals and 29 ICU locations, and not all day-pair combinations were evaluated. Shelly et al7 evaluated sampling 1 day per week in 38 non-ICU locations from 6 hospitals and found that it generated estimated DURs that were not substantially different from DURs collected during validation months. Significant differences in accuracy by the day used for sampling were identified, and the impact on CLABSI rates, aggregated for all locations, was reported to be small.

Our evaluation of sampling to obtain estimates of CLD is unique for several reasons. First, we had a large number and variety of participants, including 63 facilities and 119 inpatient locations, representing ICUs and wards with a wide range in central line utilization. Second, we assessed all day samples and all 21 day-pair samples to evaluate which sampling strategies yielded the most accurate estimate of CLD. Third, we performed an analysis of characteristics associated with increased precision of estimated CLD. Fourth, we evaluated the impact of using estimated CLD on CLABSI rates by assessing the change in CLABSI decile rank.

Because of increasing demands for and uses of HAI surveillance data, changes to NHSN surveillance that favor simplification and reduced data collection burden are paramount. Implementation of sampling to generate estimated CLD is expected to yield substantial savings in staff time spent on HAI data collection activities. For example, if it takes an estimated 15 minutes of staff time daily to collect CLD data in a single location, daily collection of CLD data over the course of 1 year uses approximately 92 staff-hours (15 minutes × 365 days). Comparatively, for the same location, sampling just 1 day per week over the course of 1 year uses approximately 13 staff-hours (15 minutes × 52 days). Thus, sampling 1 day per week yields an annual time savings of approximately 78 hours, an 86% reduction in staff time. Similarly, sampling 2 days per week would require approximately 26 hours (15 minutes × 104 days) of staff time, saving approximately 65 hours per year, a 72% reduction in staff time.
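The staff-time figures above follow from simple arithmetic; a small Python check (assuming the same 15 minutes of collection per sampled day, a 365-day year, and 52 sampled weeks) reproduces them:

```python
# Back-of-envelope check of the annual staff-time estimates.
minutes_per_day = 15  # assumed time to collect CLD data in one location

daily_hours = minutes_per_day * 365 / 60            # daily collection: ~91 h/year
one_day_week_hours = minutes_per_day * 52 / 60      # sampling 1 day/week: 13 h/year
two_day_week_hours = minutes_per_day * 2 * 52 / 60  # sampling 2 days/week: 26 h/year

print(round(daily_hours - one_day_week_hours))   # ~78 hours saved per year
print(round(daily_hours - two_day_week_hours))   # ~65 hours saved per year
```

The savings scale linearly with the per-day collection time, so a location where collection takes longer than 15 minutes saves proportionally more.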

Additional gains in the efficiency of data collection for HAI surveillance and quality measurement are anticipated through the Health Information Technology for Economic and Clinical Health Act and federal standards for “meaningful use” of electronic health record (EHR) systems.11,12 Although uptake of EHR systems is increasing,13 most are not developed for the purpose of obtaining HAI surveillance data.14 Ad hoc development of automated methods to collect CLDs from the EHR has proven successful.15-18 Thus far, however, these methods have been limited in scope to single institutions or systems, have not been tested on a wide scale, and require significant staff resources, expertise, and extensive validation to ensure accuracy. Until EHRs are widely adopted and refined for the collection and collation of data for HAI surveillance purposes, manual sampling appears to be an accurate alternative that reduces data collection burden.

There are limitations to our evaluation and the application of our findings. We did not assess the use of sampling in NHSN locations classified as NICUs and specialty care areas. These locations have the most time-intensive CLABSI denominator data collection methodology, with CLD data collected by line type (permanent and temporary) or infant birth weight (5 strata).3 Recent elimination of umbilical and central line strata in NICUs has simplified and reduced data collection burden; whether data collection burden can be further reduced by sampling to estimate CLD in NICUs and in specialty care areas remains unknown. More precise sampling strategies beyond those evaluated here may exist. For example, it is likely that sampling 3 or more days per week would result in even greater precision than the day-pair samples we evaluated; however, the additional accuracy potentially achievable with such a strategy may be offset by the additional data collection burden. It is possible that inaccurate estimated CLD for some locations could be attributed to substandard denominator data collection practices. Although the data quality errors that we assessed were infrequent, and no systematic overcounting of CLD was identified, we could not assess all data collection errors (eg, undercounting of CLD and variability by staff member) or quantify the impact of inaccurate data collection on sampling. Finally, we did not directly assess whether the sampling methodology can be successfully implemented. A second phase of this sampling evaluation is underway to assess the feasibility of implementing CLD data collection using weekly sampling over a period of 6–12 months.

These findings suggest that weekly sampling to obtain estimates of CLD is a valid alternative to daily collection of CLABSI denominator data for a majority of, but not all, locations. These results are encouraging and represent a major step toward augmenting current NHSN surveillance methods for the collection of CLABSI denominator data. Our results will be used, along with those from an evaluation of implementing weekly sampling and an analysis of the accuracy of weekly sampling for other device-days, to develop guidance for implementing sampling into NHSN in the near future. Pending widespread adoption of EHRs for HAI data collection, weekly sampling is a simplified and less resource-intensive method that yields accurate estimates of CLD for calculating CLABSI rates.

Acknowledgments

We thank all participating acute care hospitals and their staff for contributing data to enable this evaluation.

Financial support. This evaluation was funded by the Centers for Disease Control and Prevention Emerging Infections Program Cooperative Agreement.

Potential conflicts of interest. All authors report no conflicts of interest relevant to this article. All authors submitted the ICMJE Form for Disclosure of Potential Conflicts of Interest, and the conflicts that the editors consider relevant to this article are disclosed here.

References

  1. Fridkin S, Olmstead R. Meaningful measure of performance: a foundation built on valid, reproducible findings from surveillance of health care-associated infections. Am J Infect Control 2011;39:87–90.
  2. National Healthcare Safety Network Web site. http://www.cdc.gov/nhsn/cms-welcome.html. Accessed June 5, 2012.
  3. National Healthcare Safety Network central line–associated bloodstream infection (CLABSI) event. http://www.cdc.gov/nhsn/PDFs/pscManual/4PSC_CLABScurrent.pdf. Accessed June 5, 2012.
  4. Tokars JI, Klevens RM, Edwards JE, Horan TC. Measurement of the impact of risk adjustment for central line–days on interpretation of central line–associated bloodstream infection rates. Infect Control Hosp Epidemiol 2007;28:1025–1029.
  5. Talbot TR, Chernetsky Tejedor S, Greevy RA, et al. Survey of infection control programs in a large national healthcare system. Infect Control Hosp Epidemiol 2007;28:1401–1403.
  6. Burke JP. Infection control: a problem for patient safety. N Engl J Med 2003;348:651–656.
  7. Shelly MA, Concannon C, Dumyati G. Device use ratio measured weekly can reliably estimate central line-days for central line-associated bloodstream infection rates. Infect Control Hosp Epidemiol 2011;32:727–730.
  8. Voges KA, Webb D, Lauren L, Fish LL, Kressel AB. One-day point-prevalence survey of central, arterial, and peripheral line use in adult inpatients. Infect Control Hosp Epidemiol 2009;30:606–608.
  9. Klevens RM, Tokars JI, Edwards JR, Horan TC. Sampling for collection of central line-day denominators in surveillance of healthcare-associated bloodstream infections. Infect Control Hosp Epidemiol 2006;27:338–342.
  10. National Healthcare Safety Network. Denominators for Intensive Care Unit (ICU)/Other Locations (not NICU or SCA). http://www.cdc.gov/nhsn/forms/57.118_DenominatorICU_BLANK.pdf. Accessed June 6, 2012.
  11. Atreja A, Gordon SM, Pollock DA, Olmsted RM, Brennan PJ; the Healthcare Infection Control Practices Advisory Committee. Opportunities and challenges in utilizing electronic health records for infection surveillance, prevention, and control. Am J Infect Control 2008;36(suppl):S37–S46.
  12. Jha AK. The promise of electronic records: around the corner or down the road? JAMA 2011;306:880–881.
  13. DesRoches CM, Worzala C, Joshi MS, Kralovec PD, Jha AK. Small, nonteaching, and rural hospitals continue to be slow in adopting electronic health record systems. Health Affairs 2012;5:1092–1099.
  14. Jha AK, Classen DC. Getting moving on patient safety: harnessing electronic data for safer care. N Engl J Med 2011;365:1756–1758.
  15. Trick WE, Chapman WW, Wisniewski MF, Peterson BJ, Solomon SL, Weinstein RA. Electronic interpretation of chest radiograph reports to detect central venous catheters. Infect Control Hosp Epidemiol 2003;24:950–954.
  16. Wright MO, Fisher A, John M, Reynolds K, Peterson LR, Robicsek A. The electronic medical record as a tool for infection surveillance: successful automation of device-days. Am J Infect Control 2009;37:364–370.
  17. Hota B, Harting B, Weinstein RA, Lyles RD, Bleasdale SC, Trick W; Centers for Disease Control and Prevention Epicenters. Electronic algorithmic prediction of central vascular catheter use. Infect Control Hosp Epidemiol 2010;31:4–11.
  18. Chernetsky Tejedor SG, Garrett G, Jacob J, et al. Electronic documentation of central line-days: validation is essential. In: Program and abstracts of the 2011 Society for Healthcare Epidemiologists of America (SHEA) Annual Meeting. Dallas, TX: SHEA, 2011. Abstract 4202. http://shea.confex.com/shea/2011/webprogram/Paper4202.html. Accessed June 6, 2012.

Acknowledgments

We thank all participating acute care hospitals and their staff for contributing data to enable this evaluation.

Financial support. This evaluation was funded by the Centers for Disease Control and Prevention Emerging Infections Program Cooperative Agreement.

Potential conflicts of interest. All authors report no conflicts of interest relevant to this article. All authors submitted the ICMJE Form for Disclosure of Potential Conflicts of Interest, and the conflicts that the editors consider relevant to this article are disclosed here.

References

  1. Fridkin S, Olmstead R. Meaningful measure of performance: a foundation built on valid, reproducible findings from surveillance of health care–associated infections. Am J Infect Control 2011;39:87–90.
  2. National Healthcare Safety Network Web site. http://www.cdc.gov/nhsn/cms-welcome.html. Accessed June 5, 2012.
  3. National Healthcare Safety Network central line–associated bloodstream infection (CLABSI) event. http://www.cdc.gov/nhsn/PDFs/pscManual/4PSC_CLABScurrent.pdf. Accessed June 5, 2012.
  4. Tokars JI, Klevens RM, Edwards JR, Horan TC. Measurement of the impact of risk adjustment for central line–days on interpretation of central line–associated bloodstream infection rates. Infect Control Hosp Epidemiol 2007;28:1025–1029.
  5. Talbot TR, Chernetsky Tejedor S, Greevy RA, et al. Survey of infection control programs in a large national healthcare system. Infect Control Hosp Epidemiol 2007;28:1401–1403.
  6. Burke JP. Infection control: a problem for patient safety. N Engl J Med 2003;348:651–656.
  7. Shelly MA, Concannon C, Dumyati G. Device use ratio measured weekly can reliably estimate central line–days for central line–associated bloodstream infection rates. Infect Control Hosp Epidemiol 2011;32:727–730.
  8. Voges KA, Webb D, Lauren L, Fish LL, Kressel AB. One-day point-prevalence survey of central, arterial, and peripheral line use in adult inpatients. Infect Control Hosp Epidemiol 2009;30:606–608.
  9. Klevens RM, Tokars JI, Edwards JR, Horan TC. Sampling for collection of central line–day denominators in surveillance of healthcare-associated bloodstream infections. Infect Control Hosp Epidemiol 2006;27:338–342.
  10. National Healthcare Safety Network. Denominators for Intensive Care Unit (ICU)/Other Locations (not NICU or SCA). http://www.cdc.gov/nhsn/forms/57.118_DenominatorICU_BLANK.pdf. Accessed June 6, 2012.
  11. Atreja A, Gordon SM, Pollock DA, Olmsted RM, Brennan PJ; the Healthcare Infection Control Practices Advisory Committee. Opportunities and challenges in utilizing electronic health records for infection surveillance, prevention, and control. Am J Infect Control 2008;36(suppl):S37–S46.
  12. Jha AK. The promise of electronic records: around the corner or down the road? JAMA 2011;306:880–881.
  13. DesRoches CM, Worzala C, Joshi MS, Kralovec PD, Jha AK. Small, nonteaching, and rural hospitals continue to be slow in adopting electronic health record systems. Health Affairs 2012;31(5):1092–1099.
  14. Jha AK, Classen DC. Getting moving on patient safety: harnessing electronic data for safer care. N Engl J Med 2011;365:1756–1758.
  15. Trick WE, Chapman WW, Wisniewski MF, Peterson BJ, Solomon SL, Weinstein RA. Electronic interpretation of chest radiograph reports to detect central venous catheters. Infect Control Hosp Epidemiol 2003;24:950–954.
  16. Wright MO, Fisher A, John M, Reynolds K, Peterson LR, Robicsek A. The electronic medical record as a tool for infection surveillance: successful automation of device-days. Am J Infect Control 2009;37:364–370.
  17. Hota B, Harting B, Weinstein RA, Lyles RD, Bleasdale SC, Trick W; Centers for Disease Control and Prevention Epicenters. Electronic algorithmic prediction of central vascular catheter use. Infect Control Hosp Epidemiol 2010;31:4–11.
  18. Chernetsky Tejedor SG, Garrett G, Jacob J, et al. Electronic documentation of central line–days: validation is essential. In: Program and abstracts of the 2011 Society for Healthcare Epidemiology of America (SHEA) Annual Meeting. Dallas, TX: SHEA, 2011. Abstract 4202. http://shea.confex.com/shea/2011/webprogram/Paper4202.html. Accessed June 6, 2012.