Original Article

Association between hospital accrediting agencies and hospital outcomes of care in the United States

Mark Kato1, Dimitrios Zikos2

1Central Michigan University, Bay City, MI, USA; 2Central Michigan University, Mt. Pleasant, MI, USA

Contributions: (I) Conception and design: Both authors; (II) Administrative support: Both authors; (III) Provision of study materials or patients: Both authors; (IV) Collection and assembly of data: Both authors; (V) Data analysis and interpretation: Both authors; (VI) Manuscript writing: Both authors; (VII) Final approval of manuscript: Both authors.

Correspondence to: Mark Kato. Central Michigan University, 4789 Spitler Dr., Bay City, MI 48706, USA. Email: kato.mark@outlook.com.

Background: Hospital accreditation standards define the requirements hospitals must meet to receive reimbursement. There are four independent hospital accrediting organizations in the United States: The Joint Commission, Det Norske Veritas Healthcare (DNV), the Center for Improvement in Healthcare Quality (CIHQ), and the Healthcare Facilities Accreditation Program (HFAP).

Methods: A cross-sectional analysis was completed to examine (I) differences in disease-specific 30-day mortality and various Hospital Acquired Infection (HAI) rates across hospitals accredited by different agencies, and (II) whether one or more of the agencies are associated with any of these outcomes, after controlling for hospital structure characteristics. Results are expected to provide useful information to hospital decision makers about accreditation agency choices and quality of care. Hospital demographic data were consolidated from the 2018 American Hospital Association (AHA) database and the Centers for Medicare and Medicaid Services (CMS) Hospital Compare database for analysis.

Results: According to post-hoc comparisons with Tukey HSD, mean chronic obstructive pulmonary disease (COPD) and heart failure (HF) mortality differed significantly between hospitals accredited by the Joint Commission and DNV, and between the Joint Commission and HFAP, respectively. None of the other 30-day mortality and HAI outcomes were found to differ across the three accrediting agencies. After controlling for several hospital structure variables, only DNV accreditation status was found to be associated with an increase in 30-day COPD mortality (b=0.225, P<0.01). No agency was associated with 30-day HF mortality or central line-associated bloodstream infection (CLABSI) rates.

Conclusions: As the healthcare industry looks to reduce costs and improve outcomes, accreditation agencies must play an important role. As CMS and healthcare leaders continue to evaluate and implement policies to improve efficacy, hospital accreditation agencies will need to revisit their focus and the processes they influence in hospitals. The healthcare industry should evaluate the current Conditions of Participation (CoP) and accompanying processes to better align the accreditation process with improving patient outcomes.

Keywords: Hospital accreditation; Joint Commission; hospital outcomes; hospital acquired conditions


Received: 20 March 2021; Accepted: 08 July 2021; Published: 25 June 2022.

doi: 10.21037/jhmhp-21-24


Introduction

Purpose

Improving outcomes, reducing harm, and decreasing the costs of care have been at the forefront of healthcare leaders’ minds for decades. The focus on quality came to a head in 1999 after the release of “To Err is Human”, which asserted that 44,000 to 98,000 people die each year from errors resulting from poor processes, contributing billions of dollars in hospital costs (1). Hospital practices and processes are evaluated through hospital accreditation, which reviews process performance, adherence to environmental standards, and a hospital’s ability to continually improve. The Centers for Medicare and Medicaid Services (CMS) include in their Conditions of Participation (CoP) that hospitals be accredited by a CMS-approved agency or pass a State Health and Human Services (SHHS) inspection to receive Medicare and Medicaid funding (2). Hospital accreditation standards function as the framework through which hospitals meet the CMS CoP to receive reimbursement. These CoP requirements help form the basis for care processes in hospitals and are important in designing safe and effective care. The objective of this research is to determine if there is a significant difference in hospital quality scores across hospitals that utilize different accrediting bodies. In that respect, the study does not seek to study the effect of accreditation vs. non-accreditation on quality, but instead compares hospitals accredited by different agencies. CMS has mandated since its inception that hospitals must meet the CoP to be eligible for reimbursement from CMS programs. Hospital reimbursement is now shifting to value-based care that rewards performance on CMS quality measures. The accreditation process seeks to measure and evaluate physical plant standards and administrative and clinical processes, and to understand the outcomes of care in the episodes that are analyzed. The survey process currently utilized by the various agencies uses the patient care tracer methodology, which evaluates a patient’s journey of care and the collaboration among the different patient care areas.

Physicians, nurses, ancillary staff, and administrators spend a significant amount of time and expense keeping up to date on the administrative tasks of accreditation but at what benefit to patient care? As healthcare leaders search for ways to reduce costs and improve outcomes, accreditation agencies will need to be a trusted partner going forward.

Scope

The objective is to study how hospitals accredited by different independent agencies perform against one another. This knowledge will help hospital decision makers understand the role of accreditation agency selection in terms of quality, and whether there is any association between specific agencies and quality. Much of the previous literature evaluates outcomes for hospitals that utilize the Joint Commission against those surveyed by SHHS. Other studies compare only one accrediting body against all other peers, and many evaluate the differences between accredited and non-accredited hospitals throughout the world. Less is known about how outcome measures compare across hospitals accredited by different agencies. This research compares the agencies on metrics utilized in the CMS pay-for-performance hospital programs; hospitals surveyed by SHHS were not included, as doing so would introduce significant variation.

Research questions

Research question 1

1(a): Is there a statistically significant difference in the Hospital Acquired Infection (HAI) standardized infection ratio (SIR) rates across hospitals accredited by different independent accrediting agencies? [analysis of variance (ANOVA) and post-hoc tests].

1(b): Is there a significant association between the (I) HAI SIR rates and the (II) Healthcare Facilities Accreditation Program (HFAP) and Det Norske Veritas Healthcare (DNV) hospitals against Joint Commission ones, after controlling for hospital structure characteristics? [multiple linear regression (MLR)].

Research question 2

2(a): Is there a difference in the 30-day mortality rates across hospitals accredited by different independent accrediting agencies? (ANOVA and post-hoc tests).

2(b): Is there a significant association between the (I) 30-day mortality rates and the (II) HFAP and DNV hospitals against Joint Commission ones, after controlling for hospital structure characteristics? (MLR).

Hospital accreditation

Accreditation survey teams visit hospitals every twelve months to three years to evaluate clinical care and administrative processes. Typically, less than one week is spent assessing a hospital. Accreditation teams review the hospital’s processes to ensure they are in line with the agency’s standards. Hospital accrediting agencies typically employ the tracer methodology, which focuses on evaluating processes of care throughout a hospital. There are different types of tracers deployed by accreditation teams. Patient tracers focus on individual patients and the care they received. Program-specific tracers review care processes within a clinical discipline. System tracers review the institution’s processes from data management to infection control (3). The accreditation tracer team is often composed of a nurse, physician, administrator, and often a facility engineer. Maintaining accreditation readiness is time intensive and can lead to increased costs and decreased staff morale.

A recent study found that of 4,400 hospitals in the United States, 3,337 were accredited and 1,063 underwent a State review (4). Today, there are four independent accrediting organizations in the United States: The Joint Commission, DNV, the Center for Improvement in Healthcare Quality (CIHQ), and the HFAP, with minimal literature comparing patient safety outcomes among the different organizations (5). If a hospital does not use one of the four accrediting bodies, it must be reviewed by its SHHS. Currently, the industry leader is the Joint Commission, which accredits almost 3,000 hospitals throughout the United States, followed by DNV and HFAP. In 2011, the newest accrediting agency, CIHQ, was given approval by CMS to accredit hospitals.

CMS is transforming payments to reward hospitals through pay-for-performance programs. The aim of CMS’s Quality Strategy (6) “is to optimize health outcomes by improving quality and transforming the health care system”. The Affordable Care Act (ACA) authorized CMS to utilize three major programs to evaluate quality, cost, and patient satisfaction: Hospital Value Based Purchasing (HVBP), the Hospital Readmissions Reduction Program (HRRP), and the Hospital Acquired Conditions Reduction Program (HACRP) (6). CMS uses the results to adjust hospitals’ Diagnosis Related Group (DRG) payment ratios, which may impact up to 6% of a hospital’s total reimbursement.

Prior research has outlined that poor quality and medical errors are significant contributors to the rising costs of healthcare. A 2009 study conducted by Fuller et al. found that potentially preventable complications added 9.4–9.7% in costs to the California and Maryland healthcare systems. Extrapolated to the entire country, this represented an additional $88 billion in waste in the healthcare system in 2006 (7).

McFadden et al. found that the most significant gap in reducing errors was system redesign. System redesigns include changing primary care practices to patient-centered medical homes or shifting surgical procedures to an outpatient setting. They highlighted the Joint Commission’s emphasis on patient safety in reducing errors. Their conclusion was that hospital accreditation must play a major role in reducing errors (8).

In 2018, Griffith reviewed the approach of accrediting agencies and proposed a methodology in line with other industries. The proposed approach should focus on quality outcome performance, a performance improvement plan, audited financial statements, ability to address community needs, and the ability to identify and mitigate risks within the hospital (9). A study by Brasure et al. found rural hospitals were less likely to be accredited by the Joint Commission than urban hospitals due to high accreditation fees (10).

In 2003, Chen et al. examined the association between Joint Commission accreditation and the quality of care and survival of patients admitted for acute myocardial infarction (AMI). Joint Commission hospitals had higher performance on AMI processes of care, and hospitals with the highest level of Joint Commission accreditation also had lower AMI mortality rates. The different levels of Joint Commission accreditation were not otherwise indicative of quality (11).

Moffett and Bohara analyzed patient outcomes from the Healthcare Cost and Utilization Project (HCUP) to evaluate the effectiveness of Joint Commission accrediting inspections (12). The analysis revealed a significant positive association between accreditation and mortality performance. The data also revealed that mortality performance was associated with the amount of time since the last accreditation inspection.

In 2007, Longo et al. developed a survey to assess the impact of hospitals’ efforts to implement safe practices for care and found that hospitals accredited by the Joint Commission had higher scores in safety practices than hospitals that were not accredited (13). Schmaltz et al. completed a longitudinal study to determine if an association existed between Joint Commission accreditation and outcomes compared to non-Joint Commission hospitals. The study comprised 3,891 hospitals from 2004–2008 and compared 16 different quality measures. Joint Commission hospitals had superior performance by 2008 and improved more incrementally over the five-year period compared to the other hospitals in the study (14). Lam et al. in 2018 found that accredited hospitals had a difference in 30-day mortality rates compared to hospitals that are not accredited (4). Accredited hospitals performed better in 30-day readmission rates for medical conditions but not surgical conditions. Joint Commission outcomes were further compared to those of other accrediting agencies, but no direct comparison of all four agencies was provided. Despite those differences between accredited and non-accredited hospitals, no difference was observed in the mortality rates or readmission rates of Joint Commission hospitals compared to hospitals accredited by other agencies.

The literature has been conclusive that accredited hospitals have better quality scores than hospitals that are not accredited (4,11,12,14). CMS knew the standards set by agencies like the Joint Commission would be essential in maintaining standards and improving patient quality. The costs associated with accreditation fees and survey preparedness can be overwhelming, creating an unsustainable cost structure (15). Leaders need to be confident that the standards, and adherence to their processes, provide patients and clinicians with the best outcomes. Costs associated with independent accreditation have kept some hospitals from pursuing accreditation (9).

While most of the aforementioned studies primarily focus on comparing outcomes between accredited and non-accredited organizations, there is scarce literature comparing hospital outcomes of care between hospitals accredited by different agencies. For this reason, we designed this research to examine whether there is a significant difference in 30-day mortality and Hospital Acquired Infection (HAI) rates across the accrediting bodies in the United States, and to further examine the association between accreditation by the different agencies and the two aforementioned hospital outcomes of care. We are motivated by the ongoing transition in which hospital reimbursement is shifting to value-based care that rewards performance in quality; hospitals spend a significant amount of time and expense keeping current on accreditation, but at what benefit to patients? To assess hospital outcomes, we chose to determine if there were significant associations between hospital acquired infections and hospital mortality rates and a hospital’s accrediting agency. These outcomes were selected because they are associated with a hospital’s processes of care and are major components of CMS’s pay-for-performance programs. While the goal of this research is not to find causal associations, the findings can inform future studies and the hospital decision-making process in choosing accrediting bodies. We present the following article in accordance with the MDAR reporting checklist (available at https://jhmhp.amegroups.com/article/view/10.21037/jhmhp-21-24/rc).


Methods

The study followed a cross-sectional retrospective design to determine (I) if there is any difference in disease-specific 30-day mortality rates, and various HAI rates across hospitals accredited by different agencies, and (II) whether one or more of the examined accreditation agencies are associated with any of the above two outcomes, after controlling for hospital structure characteristics. This study used the hospital as the unit of analysis. The study was conducted in accordance with the Declaration of Helsinki (as revised in 2013).

To complete this study, hospital demographic data were consolidated from the 2018 American Hospital Association (AHA) database and CMS’s Hospital Compare database. More specifically, the study leveraged publicly reported measures available through the Hospital Compare portal (www.medicare.gov/care-compare). Hospital performance is measured by the current CMS quality-based payment programs implemented since the ACA: HVBP, HRRP, and HACRP, which utilizes HAI measures from AHRQ. The mortality data cover a three-year time frame from July 2015 through June 2018 and are taken from the Complications and Deaths dataset on Hospital Compare. The dataset was further filtered down to include only the six mortality measures that are aggregated by CMS, along with hospital demographic data. Mortality data are compiled by CMS using Medicare claims data to calculate a hospital’s mortality rate (16). The rate is risk adjusted to account for a patient’s age and prior medical history based on the diagnosis coding contained in their claims data. The patient safety measures in the Healthcare Associated Infections dataset were then filtered to include only the HAI SIR data. Healthcare infection data are captured by the CDC through the National Healthcare Safety Network (NHSN) data collection protocols (17). The HAI SIR is also a risk-adjusted measure that factors in hospital information and patient demographics. The Central Michigan University Institutional Review Board found that this study was exempt as no human subjects were involved.
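
To make the SIR metric concrete, the short sketch below shows the observed-over-predicted calculation that underlies these measures; the column names are illustrative placeholders, not the actual Hospital Compare schema.

```python
import pandas as pd

# Minimal sketch: an SIR divides the infections a hospital reported by the
# number predicted for it by the NHSN risk-adjustment model.
hai = pd.DataFrame({
    "hospital_id": ["A", "B"],
    "observed_clabsi": [4, 1],
    "predicted_clabsi": [5.2, 2.8],
})
hai["clabsi_sir"] = hai["observed_clabsi"] / hai["predicted_clabsi"]
# SIR < 1 means fewer infections than predicted; SIR > 1 means more.
print(hai[["hospital_id", "clabsi_sir"]])
```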

The 2018 AHA database was used to provide the study’s independent variables for the multivariate analysis. AHA collects demographic and other information annually from hospitals and healthcare organizations throughout the United States through a voluntary survey process (18). Specifically, AHA data were utilized to identify the hospital’s accrediting agency, number of beds, medical and surgical volumes, full time equivalents (FTEs), critical care capacity, non-acute care services, hospital system membership, and various other demographics.

The two different data files were merged to complete the analysis. Hospitals throughout the United States have a unique Medicare payment identification number. The Medicare ID was used to match hospital outcome data in the Hospital Compare data files to demographic data from the AHA dataset. The datasets were merged utilizing the Medicare ID as the primary key to create a comprehensive dataset for analysis.
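
A minimal sketch of this merge step, assuming hypothetical file and column names (the actual AHA and Hospital Compare field names differ), might look as follows:

```python
import pandas as pd

# Hedged sketch of the merge described above; file and column names are
# assumptions rather than the exact fields in the AHA or Hospital Compare files.
aha = pd.read_csv("aha_2018.csv", dtype={"medicare_id": str})
outcomes = pd.read_csv("hospital_compare_outcomes.csv", dtype={"medicare_id": str})

# Inner join on the Medicare provider ID, so only hospitals present in both
# sources enter the analytic dataset.
merged = aha.merge(outcomes, on="medicare_id", how="inner")
print(merged.shape)
```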

Inclusion-exclusion criteria

The final data file included all hospitals that participated in the 2018 AHA survey and completed the accreditation information portion of the survey identifying accreditation by one of the four independent agencies. Any hospital that identified more than one agency was excluded from the analysis. The final hospital inclusion criteria for the analysis included only those hospitals that were identified as General Medical Surgical hospitals within the AHA data file. This criterion excluded those hospitals that specialize in Inpatient Rehabilitation, Behavioral Health, Cancer hospitals, Children’s hospitals, etc. to remain consistent with pay-for-performance programs (6). Data elements in the AHA dataset that did not have complete data for every hospital were also removed from the analysis. Hospitals not accredited by any of the four independent agencies were also removed, since the research is focusing on comparing the accrediting bodies rather than the accreditation process in general. Additionally, since there were only 13 hospitals accredited by CIHQ, these were removed from the comparison and multivariate analysis of this study and are only reported in descriptive statistics. The final dataset was comprised of N=2,764 total hospitals.
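
Continuing the merge sketch above, the inclusion-exclusion filters could be expressed roughly as follows; the column names and category labels are placeholders, not actual AHA codes.

```python
# Continues the "merged" DataFrame from the previous sketch.
agencies = ["Joint Commission", "DNV", "HFAP", "CIHQ"]

df = merged[merged["hospital_type"] == "General Medical Surgical"]  # hospital-type filter
df = df[df["agency"].isin(agencies)]      # keep hospitals whose single reported agency is one of the four
df = df.dropna(axis=1)                    # drop AHA columns with incomplete data
analysis = df[df["agency"] != "CIHQ"]     # CIHQ (n=13) kept for descriptive statistics only
print(len(df), len(analysis))
```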

Statistical analysis

Descriptive analysis was conducted with the aim of profiling the characteristics of each agency. Furthermore, one-way ANOVA was used to assess whether the outcomes under study differ significantly among the agency groups. For the statistically significant ANOVA tests, pairwise analysis (Tukey post-hoc) was then used to assess whether the outcomes differ significantly between pairs of the accrediting bodies. Finally, separate MLR analyses were conducted, one for each outcome previously found to differ significantly across the accrediting bodies per the ANOVA results: the MLR aimed to study the association of the accreditation agencies with the outcomes of interest, after controlling for the following hospital characteristics: hospital teaching status, number of hospital admissions, nursing and physician FTEs, nursing FTE to bed ratios, proportion of isolation rooms, critical access status, and rural referral center status. In all MLRs we chose the Joint Commission as the reference agency to examine how the other two agencies perform against the established largest agency in the United States. All statistical tests were conducted at the 95% level of statistical significance. The analysis was completed with the statistical software SPSS version 25. Table 1 shows the study variables and their operational definitions.
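
For illustration only, the sketch below outlines this pipeline for one outcome (30-day COPD mortality) in Python with placeholder variable names; the published analysis was performed in SPSS version 25.

```python
from scipy import stats
from statsmodels.stats.multicomp import pairwise_tukeyhsd
import statsmodels.formula.api as smf

# Continues the "analysis" DataFrame from the earlier sketches; all variable
# names are assumptions, not the actual dataset fields.
sub = analysis.dropna(subset=["copd_mort_30d"])

# One-way ANOVA across the three agencies, followed by Tukey HSD if significant.
groups = [g["copd_mort_30d"] for _, g in sub.groupby("agency")]
f_stat, p_val = stats.f_oneway(*groups)
if p_val < 0.05:
    print(pairwise_tukeyhsd(sub["copd_mort_30d"], sub["agency"]).summary())

# Multiple linear regression with the Joint Commission as the reference agency
# and hospital structure characteristics as controls.
model = smf.ols(
    "copd_mort_30d ~ C(agency, Treatment(reference='Joint Commission'))"
    " + teaching + admissions + rn_fte + md_fte + nurse_bed_ratio"
    " + isolation_room_prop + critical_access + rural_referral",
    data=sub,
).fit()
print(model.summary())
```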

Table 1

Study variables and data types, operational definitions, and data sources

Variable Operational definition Type Source
Main independent variable
   Accrediting agency The independent accrediting agency: HFAP, DNV, Joint Commission, CIHQ Categorical AHA Database
Dependent variables: patient safety measures
   Central line associated bloodstream infection The number of observed infections compared to the expected amount of central line infections Continuous Hospital Compare
   Catheter associated urinary tract infections The number of observed infections compared to the expected amount of catheter infections Continuous
   Surgical site infection—colon surgery The number of observed infections compared to the expected amount post colon surgical infections Continuous
   Surgical site infection—abdominal hysterectomy The number of observed infections compared to the expected amount post hysterectomy infections Continuous
   Methicillin-resistant Staphylococcus aureus (MRSA) bacteremia The number of observed infections for a hospital compared to the expected amount of MRSA infections Continuous
   Clostridium difficile (C. Diff) The number of observed infections for a hospital compared to the expected amount of C Diff cases Continuous
Dependent variables: 30-day mortality rates
   30-day AMI mortality rate 30-day death rates for patients diagnosed with AMI Continuous Hospital Compare
   30-day PN mortality rate 30-day death rates for patients diagnosed with PN Continuous
   30-day HF mortality rate 30-day death rates for patients diagnosed with HF Continuous
   30-day CABG mortality rate 30-day death rates for patients who underwent CABG Continuous
   30-day COPD mortality rate 30-day death rates for patients diagnosed with COPD Continuous
   30-day STK mortality rate 30-day death rates for patients diagnosed with STK Continuous
Control variables utilized in multivariate analysis
   Admissions Number of reported admissions Continuous AHA Database
   FTEs Number of FTEs Continuous
   Total surgical operations Number of inpatient and outpatient surgical operations Continuous
   Total hospital beds Number of beds in the facility Continuous
   Medical surgical ICU Does the hospital have intensive care unit Categorical
   Other intensive care Does the hospital have a specialty intensive care unit Categorical
   Infection isolation rooms Number of infection isolation rooms in the hospital Continuous
   Physician & dentist FTEs Number of full-time doctors and dentists Continuous
   Rural referral center Hospital categorization Categorical
   Critical access hospital Hospital categorization Categorical
   Sole community provider Hospital categorization Categorical
   Council of teaching hospital Is the hospital a teaching hospital Categorical
   Physical rehabilitation care Does the hospital have physical rehabilitation care unit Categorical
   Nurse FTE to bed ratio Number of nursing FTEs divided by number of beds Continuous

HFAP, Healthcare Facilities Accreditation Program; DNV, Det Norske Veritas Healthcare; CIHQ, Center for Improvement in Healthcare Quality; AHA, American Hospital Association; AMI, acute myocardial infarction; PN, pneumonia; HF, heart failure; CABG, coronary artery bypass graft; COPD, chronic obstructive pulmonary disease; STK, stroke; FTE, full time equivalent; ICU, intensive care unit.


Results

Descriptive analysis was conducted to profile the characteristics of hospitals accredited by each of the four agencies (Table 2). The Joint Commission accredited N=2,318 hospitals, HFAP N=97, DNV N=336, and CIHQ N=13. Hospitals accredited by the Joint Commission were consistently larger than hospitals accredited by the other agencies. Joint Commission hospitals had the most admissions (M=10,420), the highest number of beds (M=225), and the highest total number of operations (M=8,589). The larger facilities accredited by the Joint Commission contributed to higher numbers of physician and nursing FTEs and the highest proportion of teaching hospitals. The hospitals accredited by DNV were the next largest, with admissions M=8,510, beds M=188, and total surgical operations M=6,918. DNV hospitals were found to have the highest proportion of Critical Access Hospitals (CAHs) and the highest nursing FTE to bed ratios. HFAP hospitals had the lowest number of admissions (M=5,029), beds (M=120), and registered nurse FTEs (M=231), the highest proportion of rural referral centers, and the second highest proportion of CAHs.
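
A short sketch of the per-agency profiling behind Table 2, reusing the placeholder dataset and column names from the Methods sketches:

```python
# Per-agency means and standard deviations, analogous to Table 2; "df" and the
# column names come from the illustrative Methods sketches, not the real files.
profile = (
    df.groupby("agency")[["admissions", "total_beds", "total_operations", "rn_fte"]]
      .agg(["mean", "std"])
      .round(2)
)
print(profile)
```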

Table 2

Comparison of profiles of hospitals accredited by the four independent agencies

Variable Joint Commission, mean (SD) HFAP, mean (SD) DNV, mean (SD) CIHQ, mean (SD)
Physicians & dentists (FTE) 49.61 (175.92) 22.74 (38.92) 28.63 (63.25) 4.23 (8.85)
Sole community provider 6% (25%) 5% (22%) 9% (28%) 8% (28%)
Rural referral center 10% (31%) 11% (32%) 6% (23%) 8% (28%)
Critical access hospital status 11% (32%) 21% (41%) 23% (42%) 0% (0%)
Member of COTH 10% (29%) 1% (10%) 5% (22%) 0% (0%)
Physical rehabilitation care 33% (47%) 31% (47%) 31% (46%) 38% (51%)
Nurse FTE to bed ratio 0.62 (0.49) 0.64 (0.37) 0.70 (0.57) 0.63 (0.24)
Surgical cases per 1,000 admissions 1.19 (2.27) 1.70 (2.08) 1.49 (2.87) 0.41 (0.34)
Isolation rooms per 10,000 beds 0.02 (0.034) 0.02 (0.025) 0.03 (0.04) 0.03 (0.04)
Number of admissions 10,420 [11,216] 5,029 [58,412] 8,510 [12,798] 7,517 [8,342]
Total hospital beds 224.67 (226.54) 119.72 (112.01) 188.47 (262.23) 137.23 (119.69)
Total surgical operations 8,589 [10,161] 5,053 [4,658] 6,918 [9,183] 3,817 [4,283]
Medical/surgical ICU 86% (34%) 78% (41%) 72% (45%) 85% (38%)
Other intensive care unit 16% (37%) 4% (20%) 10% (29%) 0% (0%)
Number of infection isolation rooms 16.30 (20.49) 7.31 (9.12) 14.72 (26.22) 7.08 (6.36)
Registered nurses (FTE) 481.74 (633.94) 230.93 (257.11) 376.91 (613.94) 234.69 (211.40)

HFAP, Healthcare Facilities Accreditation Program; DNV, Det Norske Veritas Healthcare; CIHQ, Center for Improvement in Healthcare Quality; FTE, full time equivalent; COTH, Council of Teaching Hospitals.

Comparison of accreditation agencies in terms of quality

A series of ANOVA analyses was conducted to determine differences in the quality measures under study across the three hospital accreditation agencies (Joint Commission, DNV, and HFAP). Follow-up post-hoc tests with Tukey HSD were conducted to examine paired differences for the statistically significant ANOVAs. Beginning with the 30-day chronic obstructive pulmonary disease (COPD) mortality rates, the ANOVA was significant at the 0.05 level, F(2, 2377)=4.512, P=0.011. A statistically significant difference in mean COPD mortality was found between hospitals accredited by the Joint Commission and DNV, P=0.008. For the 30-day heart failure (HF) mortality rates, the ANOVA was also significant at the 0.05 level, F(2, 2356)=4.904, P=0.007. Post-hoc analysis revealed a statistically significant difference in mean HF mortality between hospitals accredited by the Joint Commission and HFAP, P=0.043. Central line-associated bloodstream infection (CLABSI) rates were also different across the accreditation agency hospitals [F(2, 1582)=3.186, P=0.042], although no pairwise differences were observed (Table 3). None of the other 30-day mortality and HAI outcomes (Table 1) were statistically different across the three accrediting agencies.

Table 3

Paired comparisons (Tukey)

(I) (J) (I) − (J) P value 95% CI of difference
30-day COPD mortality Joint Commission HFAP −0.023 0.983 −0.327 to 0.282
DNV −0.224 0.008 −0.399 to −0.049
HFAP DNV −0.202 0.347 −0.542 to 0.139
30-day heart failure mortality Joint Commission HFAP −0.457 0.043 −0.905 to −0.010
DNV −0.242 0.076 −0.504 to 0.019
HFAP DNV 0.215 0.573 −0.287 to 0.718
CLABSI standard infection ratio Joint Commission HFAP 0.186 0.155 −0.050 to 0.423
DNV 0.089 0.166 −0.026 to 0.205
HFAP DNV −0.096 0.652 −0.355 to 0.161

COPD, chronic obstructive pulmonary disease; HFAP, Healthcare Facilities Accreditation Program; DNV, Det Norske Veritas Healthcare; CLABSI, central line-associated bloodstream infection.

Multivariate analysis

Multivariate analyses were conducted using MLR to examine the association between the accreditation agency and the outcomes under study, after controlling for hospital teaching status, number of hospital admissions, nursing and physician FTEs, nursing FTE to bed ratios, proportion of isolation rooms, critical access status, and rural referral center status. Of the three accreditation agencies, the Joint Commission was used as the reference agency. For each outcome (dependent variable) a separate regression model was created. Only outcomes that were found to be significant in the ANOVA analysis were examined: (I) 30-day COPD mortality, (II) 30-day HF mortality, and (III) the CLABSI standardized infection ratio.
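
To show how the "Joint Commission as reference" setup maps onto the coefficients reported in Table 4, the hedged sketch below hand-codes HFAP and DNV indicators; the column names are placeholders and only a subset of the controls is included, and the actual models were fit in SPSS.

```python
import pandas as pd
import statsmodels.api as sm

# Hand-coded indicator variables make the Joint Commission the implicit
# reference category; "analysis" and its columns come from the earlier sketches.
X = pd.DataFrame({
    "hfap": (analysis["agency"] == "HFAP").astype(int),
    "dnv": (analysis["agency"] == "DNV").astype(int),
    "teaching": analysis["teaching"],
    "admissions": analysis["admissions"],
})
X = sm.add_constant(X)
fit = sm.OLS(analysis["copd_mort_30d"], X, missing="drop").fit()

# The "hfap" and "dnv" coefficients are each agency's adjusted difference from
# Joint Commission hospitals on the outcome, holding the controls constant.
print(fit.params)
```

With this coding, a positive DNV coefficient corresponds to higher adjusted 30-day COPD mortality relative to Joint Commission hospitals, which is how the b values in Table 4 should be read.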

After controlling for the aforementioned hospital characteristics, DNV accreditation status was found to be associated with an increase in 30-day COPD mortality (b=0.225, P<0.01). Of the control variables, teaching hospital status (b=−0.351, P<0.01) and the rural referral center attribute (b=0.194, P<0.05) were found to be associated with 30-day COPD mortality rates. On the other hand, none of the accreditation agencies were associated with 30-day HF mortality. Of the control variables, the rural referral center attribute (b=0.434, P<0.01) and CAH status (b=0.268, P<0.05) were associated with an increase in HF mortality, while the number of admissions showed a small negative association (P<0.01). Teaching hospital status was found to be associated with a decrease in heart failure mortality rates (b=−0.565, P<0.01). While neither of the agencies was found to be associated with the CLABSI infection ratio at the 95% statistical significance level, both accreditation attributes (HFAP and DNV) approached significance in the direction of lower CLABSI infection rates (Table 4). Of the control variables, the rural referral center attribute (b=0.094, P<0.05) and the proportion of isolation rooms (b=38.282, P<0.05) were found to be associated with an increased CLABSI infection ratio. Table 4 shows the regression summary, including those independent variables that were statistically significantly associated with each of the three outcomes.

Table 4

Multiple linear regression analyses for each of the outcomes under study

B T Sig R2
Dependent variable: 30-day COPD mortality
   (Constant) 8.654 125.553 0.000 0.027
   Accreditation agency: HFAP −0.026 −0.195 0.845
   Accreditation agency: DNV 0.225 2.904 0.004
   Teaching hospital −0.351 −3.116 0.002
   Physicians and dentists (FTE) 0.000 −1.902 0.057
   Registered nurses (FTE) 0.000 −0.399 0.690
   Nurse FTE to bed ratio −0.102 −1.489 0.137
   Rural referral center 0.194 2.445 0.015
   Critical access hospital 0.058 0.677 0.498
   Number of admissions 0.000 −0.330 0.741
   Proportion of isolation rooms −15.136 −0.923 0.356
Dependent variable: 30-day heart failure mortality
   (Constant) 11.838 117.639 0.000 0.083
   Accreditation agency: HFAP 0.239 1.277 0.202
   Accreditation agency: DNV 0.186 1.68 0.093
   Teaching hospital −0.565 −3.589 0.000
   Physicians and dentists (FTE) 0.000 −1.795 0.073
   Registered nurses (FTE) 0.000 −0.134 0.893
   Nurse FTE to bed ratio −0.05 −0.481 0.631
   Rural referral center 0.434 3.883 0.000
   Critical access hospital 0.268 2.125 0.034
   Number of admissions 0.000 −2.985 0.003
   Proportion of isolation rooms 0.501 0.021 0.983
Dependent variable: CLABSI standard infection ratio
   (Constant) 0.555 9.924 0.000 0.015
   Accreditation agency: HFAP −0.188 −1.871 0.062
   Accreditation agency: DNV −0.087 −1.747 0.081
   Teaching hospital 0.047 0.836 0.403
   Physicians and dentists (FTE) 0.000 0.347 0.729
   Registered nurses (FTE) 0.000 −0.027 0.978
   Nurse FTE to bed ratio 0.133 1.879 0.060
   Rural referral center 0.094 2.241 0.025
   Critical access hospital 0.133 0.325 0.745
   Number of admissions 0.000 0.114 0.910
   Proportion of isolation rooms 38.282 2.455 0.014

COPD, chronic obstructive pulmonary disease; HFAP, Healthcare Facilities Accreditation Program; DNV, Det Norske Veritas Healthcare; FTE, full time equivalent; CLABSI, central line-associated bloodstream infection.


Discussion

Summary of findings

Hospital leaders’ choice of which agency is best for their hospital is based on a multitude of factors: standards, costs, and the various benefits offered by the agency. This study sought to evaluate differences in 30-day mortality rates and patient safety measures across hospitals accredited by the independent agencies. The study found that there are commonalities amongst hospitals that utilize the different agencies. Hospitals accredited by the Joint Commission are more likely to be large hospitals, with annual admissions M=10,420, total beds M=224, and total surgical operations M=8,589. The Joint Commission has the highest proportion of teaching hospitals at 10%, a finding that is validated by other research (14). HFAP hospitals were on average the smallest, with total admissions M=5,029, total beds M=120, and total surgical operations M=5,053. DNV and CIHQ hospital sizes were in between the other agencies: DNV total admissions M=8,510, total beds M=188, and total surgical operations M=6,918; CIHQ total admissions M=7,517, total beds M=137, and total surgical operations M=3,817. HFAP and DNV hospitals were also found to have the highest proportions of CAHs (21% of HFAP and 23% of DNV-accredited hospitals). The observed differences may be explained by a multitude of factors. The total cost associated with accreditation is a main factor to consider when choosing an accreditation agency (10,19). Small rural hospitals have smaller margins than larger urban hospitals, which may keep them from utilizing the Joint Commission as their accreditation agency. This study did not seek to understand why hospitals chose a specific accreditation agency but recommends that this topic be further studied.

30-day mortality

The 30-day COPD mortality rates were found to differ across the three accreditation agencies. Specifically, mean COPD mortality was lower in hospitals accredited by the Joint Commission by 0.22% compared to DNV-accredited hospitals. The 30-day HF mortality rates were also different across the three accreditation agencies. Post-hoc analysis revealed that HFAP-accredited hospitals have higher HF mortality rates than Joint Commission hospitals, by 0.46%. After controlling for hospital structure characteristics, DNV accreditation status was found to be associated with an increase in 30-day COPD mortality of 0.23%, with the Joint Commission as the reference agency in the regression model. This association, though, needs to be interpreted with caution, since it may be present due to inherent population health characteristics in geographic areas where there are more DNV-accredited hospitals, and the study did not control for any social determinants of health. Neither of the two accreditation agencies in the model was associated with a decreased or increased HF mortality rate relative to the reference agency. Teaching status was found to be a driver of lower 30-day mortality rates within a hospital (0.35% and 0.57% lower 30-day COPD and HF mortality, respectively). On the contrary, the rural referral center attribute was found to be associated with increased COPD and HF 30-day mortality rates, by 0.19% and 0.43%, respectively. The Critical Access Hospital status was also associated with increased 30-day HF mortality, by 0.27%. Larger hospitals that are affiliated with universities may have lower 30-day mortality rates, and prior research has shown that these factors contribute to lower hospital mortality. When evaluating the teaching status of a hospital along with the size of a facility, other studies found similar results. In 2005, Kupersmith studied the effect of teaching hospital status on Medicare patient outcomes; teaching hospitals outperformed non-teaching hospitals not only in 30-day mortality rates but in overall quality outcomes (20). Carr et al. found that mortality rates were lower in urban teaching hospitals with a high bed count (21). Burke corroborated prior research that major teaching hospitals had lower mortality rates than non-teaching hospitals (22). Lastly, Tourangeau in 2007 found that a higher percentage of registered nursing staff providing direct care and an adequate number of nurses on staff were associated with lower 30-day mortality rates (23). A hospital’s teaching status, size, and nurse staffing are therefore significant factors associated with lower 30-day mortality rates, and the higher 30-day mortality observed in non-teaching facilities in this study is corroborated by prior studies.

Other drivers of higher 30-day mortality rates in hospitals were found to be a hospital’s designation as a rural referral center or a CAH; such hospitals had higher mortality rates than those without these designations. A CAH is a hospital located in a rural area, with 25 or fewer inpatient beds, located more than 35 miles from another hospital (24). These hospitals are in rural areas and often lack the equipment and specialty clinicians needed to treat gravely ill patients. Joynt conducted a cross-sectional study of CAH outcomes and found that CAHs did not have the clinical capabilities of larger hospitals, which led to poorer performance on process measures and 30-day mortality (25). Joynt followed up with a subsequent longitudinal study evaluating CAH outcomes from 2002 to 2010. The study reaffirmed the earlier research: by 2010, CAHs had significantly higher 30-day mortality rates than non-CAHs (26).

Patient safety measures

While CLABSI rates were different across the accreditation agency hospitals, no pairwise differences were observed. None of the other 30-day mortality and HAI outcomes were statistically different across the three accrediting agencies. After controlling for hospital structure characteristics, neither of the agencies was found to be associated with the CLABSI infection ratio at the 95% statistical significance level. The authors believe, though, that further study is needed to pinpoint the role of accreditation itself in hospital infection performance, since both accreditation attributes were associated with lower CLABSI infection rates, but only at the 90% significance level. Of the control variables, the rural referral center attribute and the proportion of isolation rooms were both found to be associated with an increased CLABSI infection ratio. What is known in the literature so far is that the Joint Commission has a robust toolkit on preventing CLABSIs that improves care processes (27). Processes like those in the CLABSI toolkit support better outcomes. CLABSI rates were higher in rural referral centers, a finding consistent with Joynt’s research in 2013 showing that hospitals located in rural areas have lower quality outcomes than urban hospitals (26). Clinical practices that are standard in accreditation processes are evidence that better outcomes can be realized when the structure is provided and the processes are monitored.

The study did not find any significant differences for CAUTIs, MRSA, C. difficile, or SSIs. The accrediting agencies have implemented similar protocols and improvement strategies, but the results were not significant in this study. More research is needed on the efficacy of the protocols endorsed by the different accreditation agencies to improve patient safety measures. If CLABSI protocols are proving effective, what can be learned from those processes that could be applied to other hospital structures and protocols?

Hospital accreditation has been an important part of the CMS CoP since the inception of the Medicare Act of 1965. Future researchers should conduct longitudinal studies of hospital quality performance to assess the impact of accreditation agency initiatives. As patient safety and affordability continue to be at the forefront of healthcare, accreditation agencies will be instrumental in the evaluation of care practices and their effectiveness. Future studies should review multiple years of data aligned to specific agency areas of focus to determine if there is a long-term trend in improved hospital performance. Additional research should also evaluate hospitals that change accreditation agencies and the impact on quality and patient safety, since the accreditation review processes used by each agency differ and can affect hospital care processes.

Limitations and future research avenues

The CIHQ accreditation agency was excluded from the ANOVA and subsequent multivariate analyses because only 13 CIHQ hospitals were included in the final population. The study was cross-sectional research that examined one year of patient safety data and a three-year snapshot of mortality data. These measures are risk adjusted by CMS for patient demographics and morbidities to ensure that individual patient complexity was accounted for in the analysis, but they are only reported at the hospital level. These risk adjustments do not consider other patient characteristics such as race, socioeconomic status, literacy, or other social determinants of health. Acknowledging this limitation, we realize that the results in their entirety cannot be attributed to the accrediting agency, although hospital processes play a major part in outcomes. These social factors play a major role in a patient’s overall health and should be considered in future research. The AHA database is a voluntary survey completed by hospitals that is supplemented with various third-party data. The study utilized control variables to account for hospital variation, including size, volume, type of hospital, and staffing. Once the ANOVA analyses were completed, only those outcomes found to be significant were carried forward to regression testing. The CLABSI linear regression model had a low model fit, meaning that most of the variability in the outcome is not explained by the model even after controlling for numerous variables. Finally, the study found associations that may be attributed to inherent characteristics of hospitals accredited by a specific accreditation agency. In that respect, it is unknown how exactly accreditation agency selection decisions are associated with outcomes of care and quality; the study was not designed to examine how structural characteristics (size, staffing, rurality, etc.) are determinants of accreditation decisions.

Health policy implications

Hospital accreditation and meeting the CMS CoP is a foundational piece of the Medicare Act. The findings of this study and other recent studies suggest that the accreditation agency utilized by a hospital does not have a significant impact on hospital outcomes or the patient experience. Accreditation agencies are responsible for ensuring that hospitals are meeting CoP standards, and agencies are increasing the rigor of their standards. However, the additional costs and administrative burdens brought on by accreditation have not been shown to significantly improve outcomes across the various agencies. The Joint Commission has been focusing on innovative ways to improve patient safety outcomes since the early 2000s and the adoption of the tracer methodology (3). Yet the interventions introduced by the Joint Commission and the other agencies have not differentiated themselves from one another. The tracer methodology, in conjunction with targeted initiatives like the CLABSI toolkit, has driven improvements in hospital quality measures but does not support a holistic quality management system.

As CMS and political leaders continue to evaluate and implement policies to reduce healthcare spending while improving outcomes, hospital accreditation agencies will need to evaluate their focus and the processes they influence in hospitals. Further study of the accreditation approach recommended by Griffith in 2018 should be completed. That approach constructs the framework for a hospital quality management system focused on quality, finances, risk identification and mitigation, and assessing the needs of the community served (9). Accreditation and the CMS CoP should work to create a hospital structure rooted in continuous improvement and risk identification to improve outcomes and better serve patients.

Lastly, CMS and healthcare leaders need to further evaluate the impact of nursing care on hospital quality outcomes. Hospitals with more nursing FTEs and better nurse-to-patient ratios have been reported to achieve better outcome scores than hospitals with lower staffing levels; prior research has shown that a higher percentage of registered nursing staff providing direct care is associated with lower 30-day mortality rates and better patient experience (23,28). Healthcare policy makers should evaluate the need for national standards for nurse-to-patient ratios. If investment in additional nurses can improve hospital quality across the country, nursing care ratios should be further explored as a policy intervention.

Although nurse staffing variables were not statistically significant in this study’s regression models, prior research identifies higher nursing FTEs as a driver of lower 30-day mortality rates, and hospital leaders need to be cognizant of the impact that nursing care ratios have on quality outcomes. The study also determined that the accreditation agency had little impact on quality and safety measures, apart from the small association between DNV accreditation and 30-day COPD mortality.

In the current healthcare environment, leaders must not only weigh outcomes but also consider the costs associated with choosing an accreditation agency. These costs include membership fees and the additional costs of constant survey readiness or additional FTEs to ensure CoP compliance. With little differentiation shown in patient outcomes, hospital leaders should factor in other criteria when choosing an accreditation agency. Cost savings associated with changing accrediting agencies could permit reallocating funding to FTEs or programs known to positively impact patient outcomes.


Conclusions

Hospital accreditation is important to ensure that hospitals meet clinical and administrative standards of patient care. Agency surveyors play an important role in assessing patient care protocols, administrative practices, and the hospital’s physical plant. This study found that DNV accreditation status is associated with a small increase in 30-day COPD mortality after controlling for hospital size, volume, staffing, and other characteristics. Further study is needed to understand whether accreditation agency status is associated with hospital infection performance.

As healthcare leaders and the industry look to implement reform that rewards value and outcomes, accreditation agencies play a pivotal role on behalf of CMS. This study sought to determine whether any accreditation agency leads the industry in evaluating processes that produce better outcomes; the results showed that this is currently not the case. Further research should continue to evaluate which portions of the accreditation process support better outcomes and lower cost, while revising those portions that do not. As the healthcare industry looks to reduce costs and improve healthcare outcomes, accreditation agencies will continue to play an important role. CMS and the healthcare industry should evaluate the current CoP and accompanying processes to better align the accreditation process with improving patient outcomes.


Acknowledgments

This work was supported by Dr. Tina Kopka-Lewis (Central Michigan University), Dr. Kay Wagner (MidMichigan Health), and Dr. Paul Berg (MidMichigan Health).

Funding: None.


Footnote

Reporting Checklist: The authors have completed the MDAR reporting checklist. Available at https://jhmhp.amegroups.com/article/view/10.21037/jhmhp-21-24/rc

Peer Review File: Available at https://jhmhp.amegroups.com/article/view/10.21037/jhmhp-21-24/prf

Conflicts of Interest: Both authors have completed the ICMJE uniform disclosure form (available at https://jhmhp.amegroups.com/article/view/10.21037/jhmhp-21-24/coif). Both authors have no conflicts of interest to declare.

Ethical Statement: The authors are accountable for all aspects of the work in ensuring that questions related to the accuracy or integrity of any part of the work are appropriately investigated and resolved. The study was conducted in accordance with the Declaration of Helsinki (as revised in 2013).

Open Access Statement: This is an Open Access article distributed in accordance with the Creative Commons Attribution-NonCommercial-NoDerivs 4.0 International License (CC BY-NC-ND 4.0), which permits the non-commercial replication and distribution of the article with the strict proviso that no changes or edits are made and the original work is properly cited (including links to both the formal publication through the relevant DOI and the license). See: https://creativecommons.org/licenses/by-nc-nd/4.0/.


References

  1. Kohn LT, Corrigan JM, Donaldson MS. Errors in health care: a leading cause of death and injury. In To err is human: Building a safer health system 2000. National Academies Press (US).
  2. Center for Medicare and Medicaid (CMS). Conditions for Coverage (CfCs) & Conditions of Participation (CoPs). Available online: https://www.cms.gov/Regulations-and-Guidance/Legislation/CFCsAndCoPs
  3. Siewert B. The Joint Commission Ever-Readiness: Understanding Tracer Methodology. Curr Probl Diagn Radiol 2018;47:131-5. [Crossref] [PubMed]
  4. Lam MB, Figueroa JF, Feyman Y, et al. Association between patient outcomes and accreditation in US hospitals: observational study. BMJ 2018;363:k4011. [Crossref] [PubMed]
  5. Poku MK, Hellmann DB, Sharfstein JM. Hospital Accreditation and Community Health. Am J Med 2017;130:117-8. [Crossref] [PubMed]
  6. Center for Medicare and Medicaid (CMS). Hospital Value-Based Purchasing. (n.d.). Available online: https://www.cms.gov/Medicare/Quality-Initiatives-Patient-Assessment-Instruments/HospitalQualityInits/Hospital-Value-Based-Purchasing-
  7. Fuller RL, McCullough EC, Bao MZ, et al. Estimating the costs of potentially preventable hospital acquired complications. Health Care Financ Rev 2009;30:17-32. [PubMed]
  8. McFadden KL, Stock GN, Gowen CR 3rd. Exploring strategies for reducing hospital errors. J Healthc Manag 2006;51:123-35; discussion 136. [Crossref] [PubMed]
  9. Griffith JR. Is It Time to Abandon Hospital Accreditation? Am J Med Qual 2018;33:30-6. [Crossref] [PubMed]
  10. Brasure M, Stensland J, Wellever A. Quality oversight: why are rural hospitals less likely to be JCAHO accredited? J Rural Health 2000;16:324-36. [Crossref] [PubMed]
  11. Chen J, Rathore SS, Radford MJ, et al. JCAHO accreditation and quality of care for acute myocardial infarction. Health Aff (Millwood) 2003;22:243-54. [Crossref] [PubMed]
  12. Moffett ML, Bohara A. Hospital quality oversight by the Joint Commission on the Accreditation of Healthcare Organizations. East Econ J 2005;31:629-47.
  13. Longo DR, Hewett JE, Ge B, et al. Hospital patient safety: characteristics of best-performing hospitals. J Healthc Manag 2007;52:188-204; discussion 204-5. [Crossref] [PubMed]
  14. Schmaltz SP, Williams SC, Chassin MR, et al. Hospital performance trends on national quality measures and the association with Joint Commission accreditation. J Hosp Med 2011;6:454-61. [Crossref] [PubMed]
  15. Mumford V, Greenfield D, Hogden A, et al. Counting the costs of accreditation in acute care: an activity-based costing approach. BMJ Open 2015;5:e008850. [Crossref] [PubMed]
  16. Hospital Compare, 30-day death (mortality) rates. Available online: https://www.medicare.gov/hospitalcompare/Data/Overview.html
  17. Hospital Compare, Infections. Available online: https://www.medicare.gov/hospitalcompare/Data/Overview.html
  18. American Hospital Association (AHA) Why AHA Data: AHA Data. Available online: https://www.ahadata.com/why-aha-data
  19. Fennel VM. (n.d.). Accreditation options: Selecting an accrediting source. Selecting an accrediting source. (Cited June 23, 2020). Available online: https://www.beckershospitalreview.com/quality/accreditation-options-selecting-an-accrediting-source.html
  20. Kupersmith J. Quality of care in teaching hospitals: a literature review. Acad Med 2005;80:458-66. [Crossref] [PubMed]
  21. Carr BG, Goyal M, Band RA, et al. A national analysis of the relationship between hospital factors and post-cardiac arrest mortality. Intensive Care Med 2009;35:505-11. [Crossref] [PubMed]
  22. Burke LG, Frakt AB, Khullar D, et al. Association Between Teaching Status and Mortality in US Hospitals. JAMA 2017;317:2105-13. [Crossref] [PubMed]
  23. Tourangeau AE, Doran DM, McGillis Hall L, et al. Impact of hospital nursing care on 30-day mortality for acute medical patients. J Adv Nurs 2007;57:32-44. [Crossref] [PubMed]
  24. Rural Health Information Hub. Critical Access Hospitals (CAHs). Available online: https://www.ruralhealthinfo.org/topics/critical-access-hospitals
  25. Joynt KE, Harris Y, Orav EJ, et al. Quality of care and patient outcomes in critical access rural hospitals. JAMA 2011;306:45-52. [Crossref] [PubMed]
  26. Joynt KE, Orav EJ, Jha AK. Mortality rates for Medicare beneficiaries admitted to critical access and non-critical access hospitals, 2002-2010. JAMA 2013;309:1379-87. [Crossref] [PubMed]
  27. Central Line Associated Bloodstream Infections Toolkit and Monograph. Available online: https://www.jointcommission.org/resources/patient-safety-topics/infection-prevention-and-control/central-line-associated-bloodstream-infections-toolkit-and-monograph/
  28. Kutney-Lee A, McHugh MD, Sloane DM, et al. Nursing: a key to patient satisfaction. Health Aff (Millwood) 2009;28:w669-77. [Crossref] [PubMed]
doi: 10.21037/jhmhp-21-24
Cite this article as: Kato M, Zikos D. Association between hospital accrediting agencies and hospital outcomes of care in the United States. J Hosp Manag Health Policy 2022;6:12.
