Enhancing Healthcare Quality: A Framework for AI Alignment with the Quintuple Aim
In the ever-evolving landscape of healthcare, the integration of artificial intelligence (AI) has emerged as a transformative force, promising to revolutionize patient care, streamline processes, and drive efficiency. However, as we integrate AI into the daily operations and care delivery of community health centers, we must ensure it advances healthcare’s larger quality goals.
AI in Healthcare: A Dual Role as Tool and Agent
In most of the conversations I hear about AI in healthcare, we talk about AI primarily as a “tool”: an instrument under the full control of its user, dependent on the clinician’s oversight, clinical skill, and knowledge. Though AI (and some AI applications in particular) undoubtedly has the traits of a “tool,” it is critical to acknowledge the traits that resemble those of an “agent.” Viewing AI only as a “tool” and ignoring its characteristics as an “agent” neglects the concerns, risks, and benefits that come with employing any other clinician or staff member in the health center. Let’s discuss the characteristics of AI technology both as a “tool” and as an “agent.”
AI as a Tool
Viewing AI as a tool refers to its function as a technology-driven resource that assists healthcare professionals in various tasks, leveraging data analysis and automation to enhance decision-making and streamline processes.
Characteristics:
Decision Support: AI tools provide clinicians with data-driven insights and recommendations to aid in clinical decision-making, diagnosis, and treatment planning.
Automation: By automating repetitive tasks and workflows, AI tools optimize operational efficiency, reduce manual errors, and improve overall productivity.
Data Analysis: AI tools can process vast amounts of healthcare data rapidly, identifying patterns, trends, and anomalies that may not be readily apparent to human clinicians.
Integrating AI as a tool requires a robust evaluation framework to ensure patient safety. Just as we meticulously assess traditional medical equipment and devices, AI technologies must undergo thorough scrutiny to mitigate risks and enhance patient care.
Policies and Procedures: Health centers should establish clear policies and procedures governing the use of AI technologies. These guidelines must ensure that AI systems are deployed and operated in a manner that aligns with best practices and regulatory requirements, similar to how we adhere to protocols for traditional medical devices.
Manufacturer's Instructions: Just as we rely on manufacturers' instructions for medical equipment, AI systems should come with clear guidelines for deployment, operation, and maintenance. Adhering to these instructions is crucial to ensure the safe and effective use of AI tools in healthcare settings.
Quality Controls: Quality control measures are essential for evaluating the performance and reliability of AI systems. Health centers must implement ongoing monitoring and maintenance to track key metrics such as accuracy, fairness, and safety (a minimal monitoring sketch follows this list). This continuous evaluation mirrors the quality control processes applied to traditional medical devices, but it will be unique to each AI application, depending on its function and intended use.
Staff Training: Just as healthcare professionals receive training on the use of medical equipment, staff working with AI tools should undergo comprehensive training programs. Clinicians must be educated and empowered to evaluate AI outputs based on their own expertise as they oversee and collaborate with AI tools.
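To make the idea of ongoing quality controls concrete, below is a minimal sketch (in Python) of how a health center’s quality team might periodically check an AI tool’s accuracy and subgroup fairness against locally defined thresholds. The field names, thresholds, and pass/fail logic are illustrative assumptions, not requirements of any particular product or regulation.

```python
# Minimal sketch of a periodic quality-control check for an AI tool's outputs.
# Field names, thresholds, and the pass/fail logic are illustrative assumptions.
from collections import defaultdict

ACCURACY_FLOOR = 0.90      # minimum acceptable overall accuracy (assumed threshold)
MAX_SUBGROUP_GAP = 0.05    # maximum tolerated accuracy gap between subgroups (assumed)

def quality_control_check(records):
    """records: list of dicts with 'prediction', 'actual', and 'subgroup' keys."""
    correct = defaultdict(int)
    total = defaultdict(int)
    for r in records:
        total[r["subgroup"]] += 1
        if r["prediction"] == r["actual"]:
            correct[r["subgroup"]] += 1

    overall = sum(correct.values()) / sum(total.values())
    by_group = {g: correct[g] / total[g] for g in total}
    gap = max(by_group.values()) - min(by_group.values())

    return {
        "overall_accuracy": round(overall, 3),
        "subgroup_accuracy": {g: round(a, 3) for g, a in by_group.items()},
        "accuracy_floor_met": overall >= ACCURACY_FLOOR,
        "fairness_gap_acceptable": gap <= MAX_SUBGROUP_GAP,
    }

# Example run against a tiny, hypothetical audit sample
sample = [
    {"prediction": "refer", "actual": "refer", "subgroup": "A"},
    {"prediction": "refer", "actual": "no-refer", "subgroup": "B"},
    {"prediction": "no-refer", "actual": "no-refer", "subgroup": "B"},
]
print(quality_control_check(sample))
```

In practice, the metrics tracked and the thresholds chosen would be set by clinical leadership for each AI application, and any failed check would trigger human review rather than automatic action.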
AI as an Agent
Viewing AI as an agent refers to its role as an autonomous entity that can perform tasks, make decisions, and interact with healthcare systems and patients, often in a proactive and predictive manner.
Characteristics:
Autonomy: AI agents can operate independently, making decisions and taking actions based on predefined algorithms and learning from real-time data inputs.
Predictive Capabilities: AI agents can forecast outcomes, trends, and potential risks, enabling proactive interventions and resource allocation to optimize patient care.
Continuous Monitoring: AI agents can monitor patient data in real-time, alerting healthcare providers to critical changes or anomalies that require immediate attention.
When considering the integration of AI into healthcare settings, it is essential to approach AI as an unlicensed clinical staff member (“agent”) working under the supervision of licensed clinical staff. This perspective is particularly relevant when addressing the process of "credentialing and privileging" for an AI agent within a healthcare organization.
Credentialing and privileging are critical processes that traditionally apply to human healthcare professionals to ensure they meet specific standards of competence and are granted the authority to perform certain clinical activities. Extending this framework to AI agents involves establishing guidelines and protocols to assess the capabilities, performance, and reliability of AI systems in delivering healthcare services.
Credentialing for AI Agents: Just as HRSA requires all human clinical staff members to undergo credentialing to verify their identification (government-issued picture identification), education, and training, AI agents should be subject to a similar evaluation process. This may involve assessing the AI system's accuracy, reliability, and adherence to clinical guidelines through rigorous testing and validation procedures (a simple validation sketch follows this list).
Privileging of AI Agents: Privileging grants specific clinical privileges to human staff members based on their fitness for duty and demonstrated clinical competence, with those privileges clearly defined for each individual. Similarly, AI agents should be granted privileges based on their demonstrated capabilities and performance in specific clinical tasks, aligned with the scope of practice defined for the AI system within the healthcare organization.
Supervision and Oversight: Licensed clinical staff members play a crucial role in supervising and overseeing the activities of AI agents. They are responsible for monitoring the performance of AI systems, intervening when necessary, and ensuring that the AI agent operates within the defined scope of practice and clinical guidelines.
Continuous Evaluation: Credentialing and privileging of AI agents should not be viewed as one-time processes. Continuous evaluation and monitoring of the AI system's performance, outcomes, and adherence to best practices are essential to maintain quality and safety in healthcare delivery. This is analogous to the assessments of clinician care (peer review) that all health centers are required by HRSA to complete at least quarterly.
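As a thought experiment, “credentialing” an AI agent before granting privileges (and re-checking it during continuous evaluation) could look like running the agent against a validation set of cases with known, guideline-concordant answers. The Python sketch below assumes a hypothetical ai_agent callable and an arbitrary 95% agreement threshold; it is illustrative only and does not reference any real product or API.

```python
# Illustrative "credentialing" harness: the AI agent must reach a minimum rate of
# agreement with guideline-concordant answers before any privileges are granted.
# The `ai_agent` callable, the test cases, and the 95% threshold are hypothetical.

PASS_THRESHOLD = 0.95

def credential_ai_agent(ai_agent, validation_cases):
    """validation_cases: list of (case_description, guideline_answer) pairs."""
    agreements = 0
    failures = []
    for case, expected in validation_cases:
        answer = ai_agent(case)          # hypothetical system under evaluation
        if answer == expected:
            agreements += 1
        else:
            failures.append({"case": case, "expected": expected, "got": answer})
    rate = agreements / len(validation_cases)
    return {
        "agreement_rate": round(rate, 3),
        "credentialed": rate >= PASS_THRESHOLD,
        "failed_cases": failures,        # routed to licensed clinical staff for review
    }
```

A failing result would route back to the same clinical leadership who oversee human credentialing, and the validation cases themselves would need periodic review, just as privileging criteria are revisited for human staff.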
By treating AI as an unlicensed clinical staff member and applying the principles of credentialing and privileging to AI agents, healthcare organizations can establish a framework for ensuring the competence, reliability, and accountability of AI systems in clinical practice. This approach promotes a culture of quality assurance, patient safety, and effective collaboration between AI technology and human healthcare professionals.
AI and the Quintuple Aim
The Quintuple Aim is a framework for conceptualizing what outcomes we should all be aiming for in healthcare. The five aims include improving population health, enhancing the care experience, reducing costs, addressing clinician burnout, and advancing health equity.
Leveraging AI to Achieve the Quintuple Aim
As healthcare organizations and clinicians utilize AI more and more, the Quintuple Aim must be kept at the forefront of our minds to ensure AI is being used in support of these larger goals. Let’s walk through each “aim” and how AI might be used in support of it.
1. Improve Patient Experience
Personalized Care: AI algorithms can analyze patient data to tailor treatment plans and interventions based on individual needs and preferences, enhancing the overall patient experience.
Enhanced Communication: AI-powered chatbots and virtual assistants can provide timely responses to patient inquiries, improving communication and engagement.
Threats to Patient Experience: AI can remove the less tangible human element from care interactions, and the effects of that loss are not yet fully understood.
2. Enhance Population Health
Predictive Analytics: AI can analyze population health data to identify at-risk groups, predict disease outbreaks, and recommend targeted interventions to improve overall community health.
Risk Stratification: AI algorithms can stratify patient populations based on risk factors, enabling healthcare providers to proactively address health disparities and prevent adverse health outcomes (a simplified scoring sketch follows this list).
Threats to Improving Population Health: Leaning too heavily on AI (especially in its early stages) may produce conclusions or recommendations that, given the large volume of underlying data, are virtually impossible to verify.
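For illustration only, risk stratification often reduces to assigning each patient a composite score from weighted risk factors and then bucketing the population by that score. The weights, factor names, and tier cutoffs in this Python sketch are hypothetical assumptions, not validated clinical criteria.

```python
# Illustrative risk-stratification sketch: weighted risk factors -> score -> tier.
# Weights, factor names, and cutoffs are hypothetical, not validated clinical criteria.

RISK_WEIGHTS = {
    "uncontrolled_a1c": 3,
    "missed_appointments": 2,
    "multiple_chronic_conditions": 3,
    "recent_ed_visit": 4,
}

def risk_score(patient_flags):
    """patient_flags: dict mapping a risk factor name to True/False."""
    return sum(w for factor, w in RISK_WEIGHTS.items() if patient_flags.get(factor))

def risk_tier(score):
    if score >= 7:
        return "high"      # candidate for proactive care-management outreach
    if score >= 3:
        return "moderate"
    return "low"

# Example: stratify a small hypothetical panel
panel = {
    "patient_001": {"uncontrolled_a1c": True, "recent_ed_visit": True},
    "patient_002": {"missed_appointments": True},
}
for patient_id, flags in panel.items():
    print(patient_id, risk_tier(risk_score(flags)))
```

Any real stratification model would need the same credentialing, monitoring, and equity checks described above before its scores influence outreach decisions.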
3. Reduce Costs
Resource Optimization: AI-driven predictive modeling can optimize resource allocation, reduce unnecessary tests or procedures, and streamline operational workflows, leading to cost savings for healthcare organizations.
Fraud Detection: AI algorithms can detect anomalies in billing data, identify potential fraud or abuse, and help prevent financial losses within the healthcare system (a simplified screening sketch follows this list).
Threats to Using AI to Reduce Costs: In the effort to cut costs, an emotionless AI tool may suggest or implement draconian measures affecting marginalized populations if its recommendations are not balanced against humanitarian considerations.
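As one simplified illustration, a first pass at spotting billing anomalies can be a basic statistical screen: compare each provider’s claim volume for a given billing code against the peer median and flag large outliers for human review. The 3x multiplier and the data shape in this sketch are assumptions for illustration, not a production fraud-detection method.

```python
# Simplified anomaly screen: flag providers whose claim count for one billing code
# sits far above the peer median. The 3x multiplier is an assumed, illustrative
# threshold; anything flagged goes to human review, never to automatic action.
from statistics import median

OUTLIER_MULTIPLIER = 3.0

def flag_billing_outliers(claims_by_provider):
    """claims_by_provider: dict mapping provider id -> claim count for one code."""
    peer_median = median(claims_by_provider.values())
    if peer_median == 0:
        return []
    return [
        provider
        for provider, count in claims_by_provider.items()
        if count > OUTLIER_MULTIPLIER * peer_median
    ]

# Example with hypothetical counts: prov_d stands out and is flagged for review
print(flag_billing_outliers({"prov_a": 40, "prov_b": 42, "prov_c": 39, "prov_d": 180}))
```

Anything flagged this way is a prompt for a person to investigate, never an automatic denial, which is one way to keep cost-reduction tools from drifting into the draconian territory described above.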
4. Enhance Provider Well-being
Clinical Decision Support: AI-powered decision support tools can assist healthcare providers in making evidence-based decisions, reducing cognitive burden, and preventing burnout.
Workflow Automation: AI can automate administrative tasks, streamline documentation processes, and free up time for providers to focus on patient care, improving overall job satisfaction.
Threats to Staff Well-Being: Many clinicians go into healthcare because they love social interaction and the connection they form with their patients. Integrating AI into more patient interactions will naturally move clinicians further from the bedside, even if only incrementally. For many clinicians, this human, low-tech interaction is where they “recharge” and feel they are truly making a difference in people’s lives. Caution is needed when balancing AI’s efficiency gains against the rewarding human connection that gives many clinicians meaning and purpose in their work.
5. Achieve Health Equity
Bias Mitigation: AI technologies can be designed to mitigate biases in healthcare delivery, ensuring equitable treatment for all patient populations.
Accessibility: AI-driven telehealth solutions and remote monitoring tools can improve access to care for underserved communities, bridging the gap in healthcare disparities.
AI’s Threat to Health Equity: AI is “educated” by learning from what it is “fed.” There is great potential for AI to expand upon and amplify biases and stereotypes if it is “fed” biased information. Leaders must constantly evaluate the data they are feeding their AI tools and agents to ensure these technologies are learning from balanced and factual information.
Implementing AI While Ensuring Healthcare Quality
When implementing AI, especially in its early stages, health centers must continually assess how it is used so that quality performance is maintained. The Institute of Medicine (IOM) has established six domains that define healthcare quality: safety, effectiveness, patient-centeredness, timeliness, efficiency, and equity.
Like any other “agents” healthcare organizations might employ or “tools” they might provide to those employees, leaders need to consider how AI can be integrated successfully into primary care operations. Let’s consider how AI, whether employed as a “tool” or an “agent” of the organization, can be used in a way that is safe, effective, patient-centered, timely, efficient, and equitable.
The AI-Quality Implementation Framework
The Artificial Intelligence-Quality Implementation Framework (AI-QIF), developed by Nilsen et al., is designed to guide decisions and activities related to the implementation of AI-based applications in healthcare. Created to address the limited knowledge about implementing AI in healthcare settings, the AI-QIF aims to facilitate the adoption of AI technologies while ensuring alignment with the Quintuple Aim. The framework focuses on outlining what needs to be done and considered during the implementation process rather than prescribing specific solutions or actions, allowing flexibility and adaptation to the unique characteristics of different AI-based applications.
By incorporating these key elements, the AI-QIF serves as a valuable tool for healthcare organizations looking to navigate the complexities of implementing AI technologies effectively and ethically in their practices.
Medical Malpractice Liability Considerations
When integrating AI into healthcare settings, it is crucial to consider the implications of medical malpractice liability. For most community health centers (Federally Qualified Health Centers), this coverage comes through the Federal Tort Claims Act (FTCA). Viewing AI as both a tool and an unlicensed agent working under the supervision of licensed clinical staff members is a prudent approach. Just like any other medical technology or employee, AI should be subject to clear contractual agreements that delineate liability responsibilities between the owners of the AI tool or agent and the healthcare clinicians employed by the organization.
Contracts play a pivotal role in defining the boundaries of liability in the context of AI implementation. These agreements should explicitly outline the roles and responsibilities of both the AI system and the human clinicians. Key points to consider in these contracts include:
Ownership of Liability: Contracts should specify who assumes liability for the decisions and actions taken by the AI system. Clear delineation of responsibility between the AI tool owners and the healthcare clinicians helps mitigate potential legal disputes in case of adverse events.
Scope of Practice: Defining the scope of practice for the AI application within the healthcare setting is essential. This includes outlining the specific tasks, decisions, or recommendations that the AI system is authorized to perform under the supervision of licensed clinicians.
Compliance with Regulations: Contracts should ensure that the AI tool complies with all relevant healthcare regulations and standards. This includes data privacy laws, ethical guidelines, and industry-specific regulations governing the use of AI in healthcare.
Training and Oversight: Establishing protocols for training healthcare staff on the use of AI tools and providing ongoing oversight is critical. Clinicians should be equipped to understand the capabilities and limitations of the AI system to ensure safe and effective integration into clinical practice.
Advancing Healthcare Quality with AI
By following the AI-Quality Implementation Framework, healthcare organizations can leverage AI technologies to enhance care quality across the six domains of healthcare quality. From improving patient safety and effectiveness to promoting equity and efficiency, AI has the potential to drive significant advancements in healthcare delivery. However, we are entering uncharted waters, and healthcare leaders must think through AI’s traits as both tool and agent in order to maintain quality and reduce risk.
Disclaimer: The information provided in this blog article is intended for general informational purposes only. It should not be construed as medical or legal advice. Readers are encouraged to consult with qualified healthcare professionals, legal advisors, or relevant authorities for specific guidance on medical practices, legal matters, or any other related issues. The content presented in this article is not a substitute for professional advice tailored to individual circumstances. The author and publisher of this article do not assume any liability for the accuracy, completeness, or applicability of the information provided. Readers are advised to exercise their judgment and discretion when applying the concepts discussed in this article to their specific situations.
Resources
Artificial Intelligence and Patient Safety: Promise and Challenges | PSNet. (n.d.). Retrieved October 12, 2024, from https://psnet.ahrq.gov/perspective/artificial-intelligence-and-patient-safety-promise-and-challenges
Kale, A. U., Hogg, H. D. J., Pearson, R., Glocker, B., Golder, S., Coombe, A., Waring, J., Liu, X., Moore, D. J., & Denniston, A. K. (2024). Detecting Algorithmic Errors and Patient Harms for AI-Enabled Medical Devices in Randomized Controlled Trials: Protocol for a Systematic Review. JMIR Research Protocols, 13, e51614. https://doi.org/10.2196/51614
Nilsen, P., Svedberg, P., Neher, M., Nair, M., Larsson, I., Petersson, L., & Nygren, J. (2023). A Framework to Guide Implementation of AI in Health Care: Protocol for a Cocreation Research Project. JMIR Research Protocols, 12, e50216. https://doi.org/10.2196/50216. https://pmc.ncbi.nlm.nih.gov/articles/PMC10666006/
Priorities for an AI in health care strategy - The Health Foundation. (n.d.). Retrieved October 12, 2024, from https://www.health.org.uk/publications/long-reads/priorities-for-an-ai-in-health-care-strategy
Quintuple Aim: What Is It, and How Can Technology Help Achieve It? (n.d.). Retrieved October 12, 2024, from https://healthtechmagazine.net/article/2023/07/quintuple-aim-perfcon
Ratwani, R. M., Adams, K. T., Kim, T. C., Busog, D.-N. C., Howe, J. L., Jones, R., & Krevat, S. (2023). Assessing Equipment, Supplies, and Devices for Patient Safety Issues. PATIENT SAFETY, 5(1), 15–25. https://doi.org/10.33940/data/2023.3.2
Six Domains of Healthcare Quality | Agency for Healthcare Research and Quality. (n.d.). Retrieved October 12, 2024, from https://www.ahrq.gov/talkingquality/measures/six-domains.html
The Quintuple Aim for Health Care Improvement: A New Imperative to Advance Health Equity | Institute for Healthcare Improvement. (n.d.). Retrieved October 12, 2024, from https://www.ihi.org/resources/publications/quintuple-aim-health-care-improvement-new-imperative-advance-health-equity
TRUSTED: A Framework for Safe & Effective AI Governance - AVIA. (n.d.). Retrieved October 12, 2024, from https://aviahealth.com/insights/trusted-a-framework-for-safe-effective-ai-governance/
Validation framework for the use of AI in healthcare: overview of the new British standard BS30440 - PMC. (n.d.). Retrieved October 12, 2024, from https://www.ncbi.nlm.nih.gov/pmc/articles/PMC10410839/