Editorial Commentary

Can artificial intelligence help improve the quality of healthcare?

Grant Phelps^, Paul Cooper

Deakin University Medical School, Victoria, Australia

^ORCID: 0000-0002-1584-0104.

Correspondence to: Associate Professor Grant Phelps. School of Medicine, Deakin University, 75 Pigdons Road, Waurn Ponds, Victoria 3216, Australia. Email: g.phelps@deakin.edu.au.

Received: 17 August 2020; Accepted: 04 September 2020; Published: 25 December 2020.

doi: 10.21037/jhmhp-20-115


The provision of safe and effective healthcare remains a profoundly important attribute of a society’s commitment to health for all (1). All Western societies have grappled with concerns about the quality of healthcare in recent decades, for care is neither as safe nor as effective as it can be (2).

The increasing uptake of forms of AI/machine learning (henceforth called AI) into the healthcare environment is a potentially welcome development, offering more systematised learning and the promise of better care. But what is the evidence that AI contributes to improving the health of our communities, to making care safer, and to delivering it in a more cost-effective way? And critically, does it help us to better provide for what really matters to healthcare consumers, through their authentic engagement in the design, delivery, monitoring and evaluation of care? These are fundamentally important considerations if we are to make inroads into improving care quality.

The problem of poor care achieved prominence with the release of the Institute of Medicine’s “Crossing the Quality Chasm” in 2001 (3), but despite significant efforts, little progress has been made in substantially improving the quality of care since that time. A seminal 2016 paper suggesting that medical error was the third leading cause of death in the US (4) was controversial, but the authors’ conclusion that medical error as a cause of death requires ‘greater attention’ went unchallenged.

Concerns about quality of care remain, despite significant efforts in most Western countries to improve care through better training and systems knowledge (5), linking quality of care to medical professionalism (6), better provision of healthcare-related data (7), and an increasing focus on system-level impacts on safety and quality (8). Disappointingly, the sense of many at the front line of care delivery is that these system-level changes have had limited impact at the point of care (9).

More recently, there has been increasing recognition that compliance-based approaches to assuring safety and quality have not had the hoped-for impact. Many health jurisdictions are now moving towards a continuous-improvement regulatory philosophy, perhaps best exemplified by the Australian Commission on Safety and Quality in Health Care’s approach to embedding continuous improvement within the (Australian) National Safety and Quality Health Service Standards (10).

In most Western healthcare systems, the focus of safety and quality efforts has largely been on the hospital setting, where most of the evidence for patient harm and lack of reliability in healthcare has been gathered. However, primary care remains, in most healthcare systems, the first point of contact for people accessing health services (11), so it is problematic that there is little evidence (by comparison with hospital-based care) of continuous improvement approaches to quality being embedded in that setting. What evidence there is points to a focus on patient safety rather than a broader view of quality (12).

Underpinning all efforts to improve quality of care is a recognition that care is fundamentally a ‘social care contract’ that reflects an ethical principle of maximising benefit whilst minimising harm, and that the contract must be preserved through a range of short- and long-term transactions between clinicians and consumers. Improving care requires a deep understanding of these activities, and an ability to profoundly influence factors that impact the process of care. Care is thus, at its core, a system of processes that can be measured and modified to improve care delivery and care outcomes.

From a socio-technical perspective, quality of care has historically been defined according to the perspective of the ‘actor’ in the care scenario (e.g., funders may equate quality with health outcomes for funds spent, while clinicians may define quality in technical terms). If we take the ‘social care contract’ perspective, then we see value in understanding quality of care from the perspective of the consumer (13). The Institute for Healthcare Improvement’s ‘Triple Aim’ approach (14) provides a way of thinking about quality which moves us from a series of measurable attributes (e.g., safe, timely, equitable, efficient, effective) to one with a deeper meaning, grounded in benefit to the individual and to society. The Triple Aim frames quality as care which improves health outcomes (individual and societal), improves the cost efficiency of care (reflecting the stewardship role that we all have for healthcare) and improves the experience of care (at its core, ensuring that care has meaning to the individual and the community).

Framing quality in this way allows us to move beyond thinking about quality in professional terms (e.g., training better clinicians) or structural terms (e.g., building better organisations) and towards a systems way of thinking that aims to fulfil the social contract of quality of care.

However, our current systems way of thinking about healthcare quality is often framed around the business needs of our organisations (e.g., demand and activity management) rather than including considerations of patient-centred care, potentially leading to management decisions that game the measures rather than pursue genuine improvement (15). These are important considerations if our healthcare organisations, including small practices, are to remain viable, but they are somewhat removed from the impact of our care systems on individual patients and on groups of patients/consumers.

The shift to more formalised organisation-level clinical governance in the 1990s (16) has required that organisations and clinicians think about quality in a way that moves beyond the organisational level and focuses on the impact of care on consumers. Clinicians have for many years recognised the importance of ‘quality assurance’ activities, ensuring that care was safe and met professional standards (17), but this focus has often failed to address the patient experience of care (18). This profoundly important shift to focusing on patient-centred care allows a different and more nuanced approach to understanding quality at the macro-system, organisational, and clinical service levels.

Our current understanding of AI suggests that it can assist in improving the safety and quality of healthcare by maximising the effectiveness of some existing safety and quality tools and approaches, but its impact in other areas is currently unclear. Further, in order for AI to successfully support improvements in care, it must earn the trust of healthcare consumers and their clinicians.

Table 1 summarises the authors’ perspectives on current potential for AI to assist in safety and quality aspects of healthcare.

Table 1

Potential impact of AI on aspects of current safety and quality work

Aspect of safety and quality work | Potential for AI to assist
Clearly understanding the roles and responsibilities for clinical governance | AI may introduce new failure modes and governance lapses (such as over-reliance on the AI and insufficient human oversight, or an inability to interrogate the decision-making reasoning of ‘black box’ AI systems)
Ensuring that incident management systems provide adequate surveillance to recognise major safety lapses | AI systems have potential to improve monitoring of incidents and advise of out-of-bounds conditions in real time (a minimal illustrative sketch follows the table). Less clear is how robust incident management can be against gaming or the influence of poor data quality. Further, AI solutions will not by themselves address under-reporting
Implementing corrective action in response to identified patient safety risks and lapses | Unclear: helpfulness of AI will largely depend on an effective clinical governance framework
Improving diagnostic efficiency and effectiveness | Considerable: AI agents are already proving useful, especially when used in conjunction with clinicians
Establishing a complaint management system that includes consumer partnership | Unclear: helpfulness of AI will largely depend on an effective governance framework
Ensuring a robust and positive safety culture | This requires leadership; the potential for AI to assist is unclear
Building leadership capacity for improvement | This is essentially a set of human behaviours, and the potential for AI to assist is unclear
Ensuring effective organisational risk management | Unclear: helpfulness of AI will largely depend on an effective governance framework
Supporting improvement-focussed data provision to clinicians and consumers | AI systems can potentially screen noisy data and may assist through more targeted information provision. Effectiveness will depend on the training of the models and on avoiding biases or other flaws, and will require effective clinical governance
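
To make the incident surveillance entry in Table 1 concrete, the following is a minimal, purely illustrative sketch of one of the simplest forms such ‘out-of-bounds’ monitoring could take: flagging days on which reported incident counts rise well above a recent baseline. It is not drawn from any deployed system; the function name, window length and threshold are assumptions for illustration only, and a real implementation would require far richer data, clinical validation and governance oversight.

```python
# Hypothetical sketch only: flag days whose incident count is unusually high
# relative to a rolling baseline. All names and thresholds are illustrative.
from statistics import mean, stdev

def flag_out_of_bounds(daily_incident_counts, window=28, z_threshold=3.0):
    """Return (day_index, count, z_score) for days more than z_threshold
    standard deviations above the mean of the preceding `window` days."""
    alerts = []
    for day in range(window, len(daily_incident_counts)):
        baseline = daily_incident_counts[day - window:day]
        mu, sigma = mean(baseline), stdev(baseline)
        if sigma == 0:
            continue  # flat baseline: cannot standardise, skip this day
        z = (daily_incident_counts[day] - mu) / sigma
        if z > z_threshold:
            alerts.append((day, daily_incident_counts[day], round(z, 1)))
    return alerts

# Illustrative data: a stable baseline followed by a sudden cluster of incidents.
counts = [2, 3, 1, 2, 2, 3, 2] * 4 + [9]
print(flag_out_of_bounds(counts))  # the final day is flagged
```

Even this trivial example makes visible the caveats noted in the table: systematic under-reporting simply lowers the baseline and the alerts alike, so the value of any such surveillance rests on data quality and on the clinical governance framework around it.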

A consideration of the potential for AI to assist in some areas of safety and quality work (see Table 1) reveals, in the authors’ view, currently limited opportunity for AI to directly assist clinicians and consumers to improve care, although we expect this situation to improve.

Why do we believe there is currently somewhat limited scope for AI to assist in improving safety and quality of care? In part this reflects the fact that the immediate changes required are largely human ones: organisational and behavioural (cultural) change, both recognised as fundamental to improving care (19). While some AI approaches (such as ‘nudging’) can affect human behaviour, their use in healthcare requires careful consideration of the outcomes desired at a system level and of the potential for unintended consequences, all within an ethical framework with a particular emphasis on privacy.

As the use of AI becomes more significant, we must also consider whether we have in place effective governance approaches, and in particular clinical governance approaches, which can assure our communities of benefit from the introduction of AI into healthcare. We therefore continue to advocate for a governance framework that is suitable for governing AI applications within the context of a human/machine socio-technical system (20).

If AI is to have a significant place in supporting care, it must be integrated into routine practice and into clinical governance approaches. Parallels can be drawn with the now routine use of (organisational) administrative data sets to inform decision-making in relation to clinical practice: the limitations of these data sets, and the assumptions and adjustments underpinning them, must be accounted for and managed (21).

The careful implementation of AI under effective governance models will help lead to safer, more effective care if it can be operationalised and integrated into existing clinical governance models, so that consumers, clinicians, managers and organisational leaders can have confidence that AI is truly supporting improvement and, importantly, that its deployment does not create unintended negative consequences. Reddy et al. have recently suggested that this could occur by healthcare organisations adopting “a clinical governance committee formulated with specific skills and experience to oversee the introduction and deployment of AI models in clinical care” (20).

Whilst this might be feasible in the very largest hospitals, the majority of healthcare in most jurisdictions’ healthcare systems is delivered and received in small organisational settings such as practices and clinics, often with limited management oversight and few resources to support a sophisticated approach to the uptake of new initiatives. This creates particular challenges for the uptake of AI as part of routine clinical quality work in some settings, particularly in the absence of agreed, industry-wide standards for AI.

It is also important to consider how AI approaches may help improve quality from the consumer perspective by supporting the entire patient journey (e.g., advising of appointments, warning of missed medications, checking for conflicting advice, assisting with the integration of documentation).

At this stage of AI deployment there is real hope that AI can positively impact both clinical performance and clinical effectiveness. However, we caution that organisations and patients/consumers will need to ensure that any decisions to implement AI are grounded in the process of care, are evidence-based, and are supported by a governance system which is alert to the potential for unintended consequences.

The articles in this edition provide hope for real future gains in clinical safety and quality from the introduction of AI into routine healthcare practice, through the meaningful, authentic engagement of healthcare consumers in the design, delivery, monitoring and evaluation of properly governed healthcare.


Acknowledgments

Funding: None.


Footnote

Provenance and Peer Review: This article was commissioned by the Guest Editors (Sandeep Reddy, Jenifer Sunrise Winter, and Sandosh Padmanabhan) for the series “AI in Healthcare-Opportunities and Challenges” published in Journal of Hospital Management and Health Policy. The article did not undergo external peer review.

Conflicts of Interest: Both authors have completed the ICMJE uniform disclosure form (available at http://dx.doi.org/10.21037/jhmhp-20-115). The series “AI in Healthcare-Opportunities and Challenges” was commissioned by the editorial office without any funding or sponsorship. The authors have no other conflicts of interest to declare.

Ethical Statement: The authors are accountable for all aspects of the work in ensuring that questions related to the accuracy or integrity of any part of the work are appropriately investigated and resolved.

Open Access Statement: This is an Open Access article distributed in accordance with the Creative Commons Attribution-NonCommercial-NoDerivs 4.0 International License (CC BY-NC-ND 4.0), which permits the non-commercial replication and distribution of the article with the strict proviso that no changes or edits are made and the original work is properly cited (including links to both the formal publication through the relevant DOI and the license). See: https://creativecommons.org/licenses/by-nc-nd/4.0/.


References

  1. World Health Organization. The world health report 2000. Health systems: improving performance. Geneva: World Health Organization, 2000.
  2. Braithwaite J, Matsuyama Y, Johnson J, editors. Healthcare reform, quality and safety: perspectives, participants, partnerships and prospects in 30 countries. Surrey: Ashgate Publishing, 2017.
  3. Institute of Medicine (US) Committee on Quality of Health Care in America. Crossing the Quality Chasm: A New Health System for the 21st Century. Washington (DC): National Academies Press (US); 2001.
  4. Makary MA, Daniel M. Medical error—the third leading cause of death in the US. BMJ 2016;353:i2139. [Crossref] [PubMed]
  5. Scott I, Phelps G, Dalton S. Arise the systems physician. Internal Medicine Journal 2014;44:1251-6. [Crossref] [PubMed]
  6. Phelps G, Dalton S. Demonstrable professionalism: linking patient-centred care and revalidation. Internal Medicine Journal 2013;43:1254-6. [Crossref] [PubMed]
  7. Clarke GM, Conti S, Wolters AT, et al. Evaluating the impact of healthcare interventions using routine data. BMJ 2019;365:l2239. [Crossref] [PubMed]
  8. Duckett S, Cuddihy M, Newnham H. Targeting zero: Supporting the Victorian hospital system to eliminate avoidable harm and strengthen quality of care. Available online: www2.health.vic.gov.au/hospitals-and-health-services/quality-safety-service/hospital-safety-and-quality-review (accessed 27 July 2020).
  9. Phelps G, Barach P. Why has the safety and quality movement been slow to improve care? International Journal of Clinical Practice 2014;68:932-5. [Crossref] [PubMed]
  10. Implementation of the NSQHS Standards. Available online: https://www.safetyandquality.gov.au/standards/national-safety-and-quality-health-service-nsqhs-standards/implementation-nsqhs-standards (accessed 27 July 2020).
  11. Hall JJ, Taylor R. Health for all beyond 2000: the demise of the Alma-Ata Declaration and primary health care in developing countries. Med J Aust 2003;178:17-20. [Crossref] [PubMed]
  12. Makeham M, Pont L, Prgomet M, et al. Patient safety in primary healthcare: an Evidence Check review brokered by the Sax Institute (Available online: www.saxinstitute.org.au) for the Australian Commission on Safety and Quality in Health Care, 2015.
  13. Understanding and measuring quality of care: dealing with complexity. Available online: https://www.who.int/bulletin/volumes/95/5/16-179309/en/ (accessed 30 July 2020).
  14. Berwick DM, Nolan T, Whittington J. The Triple Aim: Care, Health, And Cost. Health Affairs 2008;27:759-69. [Crossref] [PubMed]
  15. Tenbensel T, Jones P, Chalmers LM, et al. Gaming New Zealand's Emergency Department Target: How and Why Did It Vary Over Time and Between Organisations? Int J Health Policy Manag 2020;9:152-62. [Crossref] [PubMed]
  16. Flynn R. Clinical governance and governmentality. Health, Risk and Society 2002;4:155-73. [Crossref]
  17. Donabedian A. The effectiveness of quality assurance. Int J Qual Health Care 1996;8:401-7. [Crossref] [PubMed]
  18. Stebbing JF. Quality assurance of endoscopy units. Best Pract Res Clin Gastroenterol 2011;25:361-70. [Crossref] [PubMed]
  19. Sujan M, Furniss D, Anderson J, et al. Resilient Health Care as the basis for teaching patient safety – A Safety-II critique of the World Health Organisation patient safety curriculum. Safety Science 2019;118:15-21. [Crossref]
  20. Reddy S, Allan S, Coghlan S, et al. A governance model for the application of AI in health care. J Am Med Inform Assoc 2020;27:491-7. [Crossref] [PubMed]
  21. Scott IA, Brand CA, Phelps GE, et al. Using hospital standardised mortality ratios to assess quality of care--proceed with extreme caution. Med J Aust 2011;194:645-8. [Crossref] [PubMed]
Cite this article as: Phelps G, Cooper P. Can artificial intelligence help improve the quality of healthcare? J Hosp Manag Health Policy 2020;4:29.
