Artificial Intelligence (AI) is rapidly transforming healthcare, offering revolutionary solutions for diagnostics, patient care, and administrative efficiency. However, as AI integrates deeper into healthcare systems, compliance professionals face unprecedented challenges. The Health Care Compliance Association (HCCA) Compliance Institute recently shed light on navigating the AI maze from a compliance perspective.
Here are 9 key insights from those discussions:
1. Recognize Invisible AI Issues Early
“You may already have an AI problem.” Often, AI tools are implemented with little notice to compliance departments, potentially leading to unaddressed regulatory issues. The consensus? Compliance needs a seat at the table from the start, demanding a proactive rather than reactive stance on AI.
2. Augment Existing Healthcare Compliance Frameworks
Instead of creating new protocols from the ground up, experts advised leveraging and augmenting existing processes. The FDA’s regulation of diagnostic AI through its Software as a Medical Device (SaMD) framework was given as one example. Understanding and incorporating these established pathways can provide a structured approach to AI integration.
3. Demand Clarity in AI Contracts and Agreements
Ambiguities in AI contracts, particularly in terms of use and partnership agreements, were a significant concern for experts. Healthcare organizations must demand clarity and specificity to ensure compliance and protect patient data. The terms should reflect a clear understanding of the AI tool’s functionality and the data it will handle, emphasizing user transparency and accountability.
4. Differentiate Between Generative AI and Medical AI
The diligence for generative AI, which spans a wide variety of applications, differs significantly from that for medical AI, which typically involves single-purpose algorithms and is heavily regulated by the FDA. Organizations must understand this distinction to apply the correct level of scrutiny and regulatory compliance.
5. Be Wary of Data Use and Secondary Implications
Experts highlighted the critical need to understand the data deals underlying AI implementations. “De-identified” data assurances should be scrutinized, and the secondary use of data must be well-governed. The panel underscored the importance of understanding the value of historical data and the potential implications of its use in AI training.
6. Follow AI Compliance Beyond Healthcare
It’s essential to be aware of AI enforcement actions across different industries and their implications for healthcare. For instance, the SEC’s charges against investment advisors over misleading statements about AI signal a broader regulatory interest in AI claims and transparency. Healthcare organizations need to be vigilant about how they represent their use of AI to avoid similar pitfalls.
7. Monitor AI-Related Regulatory Exclusions and Actions
Regularly screening AI companies against exclusion lists, such as the OIG's List of Excluded Individuals/Entities (LEIE), other federal lists, and state exclusion lists, is crucial. The due diligence process should also include monitoring AI tools for potential enforcement actions, ensuring that compliance checks are an integral part of the pre-contracting phase and continue regularly thereafter.
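For teams that want to automate this step, the screening described above can be sketched in a few lines of Python. This is a minimal illustration, not a compliance tool: it assumes you have downloaded the LEIE data file from oig.hhs.gov as a CSV, and the `BUSNAME` column name and exact-match normalization are assumptions you should verify against the actual file header and your own matching policy (real screening typically also handles individual names, NPIs, and fuzzy matches).

```python
import csv


def load_exclusions(path):
    """Load business names from a downloaded LEIE CSV into a set.

    The "BUSNAME" column name is an assumption; confirm it against the
    header of the file you actually download from oig.hhs.gov.
    """
    names = set()
    with open(path, newline="", encoding="utf-8") as f:
        for row in csv.DictReader(f):
            name = (row.get("BUSNAME") or "").strip().upper()
            if name:
                names.add(name)
    return names


def screen_vendors(vendors, exclusions):
    """Return the vendors whose normalized name appears on the exclusion list."""
    return [v for v in vendors if v.strip().upper() in exclusions]
```

Because the LEIE is updated monthly, a sketch like this would need to re-download the file and re-run on a schedule, and any hit should trigger manual review rather than an automated decision.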
8. Conduct Thorough AI Risk Assessments
Organizations should conduct detailed AI risk assessments to identify potential compliance issues across all areas, including privacy, security, and false claims. The assessment should cover various organizational facets, from finance and accounting to human resources. Identifying and mitigating risks associated with the use of AI in patient billing/coding, health information management, and other operational areas is crucial to maintaining compliance and ensuring the responsible deployment of AI technologies.
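One common way to make a risk assessment like the one above actionable is a simple likelihood-times-impact scoring matrix. The sketch below is an illustrative assumption, not a prescribed HCCA method: the 1–5 scales, the review threshold of 12, and the category names are all hypothetical choices an organization would calibrate to its own risk appetite.

```python
from dataclasses import dataclass


@dataclass
class AIRisk:
    """One identified AI compliance risk (fields are illustrative)."""
    area: str        # e.g. "patient billing/coding", "human resources"
    category: str    # e.g. "privacy", "security", "false claims"
    likelihood: int  # 1 (rare) .. 5 (almost certain)
    impact: int      # 1 (minor) .. 5 (severe)

    @property
    def score(self):
        # Classic likelihood x impact heat-map score.
        return self.likelihood * self.impact


def prioritize(risks, threshold=12):
    """Rank risks by score, flagging those at or above a review threshold.

    The threshold of 12 is an arbitrary example value.
    """
    ranked = sorted(risks, key=lambda r: r.score, reverse=True)
    return [(r, r.score >= threshold) for r in ranked]
```

A structure like this makes it easy to roll findings up by organizational facet (finance, HR, billing) and to show leadership which AI risks cleared the review threshold.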
9. Enhance Compliance Training to Include AI-Related Scenarios
In the ever-evolving threat landscape, it’s critical to update compliance training programs to include AI-related scenarios and exercises. This will ensure that staff members are aware of the nuances associated with AI technologies, including privacy concerns, regulatory requirements, and ethical implications. Incorporating case studies of AI missteps and success stories into training can provide practical insights and prepare the workforce for the real-world challenges of AI implementation in healthcare settings.
Balancing the Equation
AI holds the potential to catalyze advancements in healthcare, but with great power comes great responsibility. The insights shared by the HCCA experts underscore the importance of a proactive, educated, and multidisciplinary approach to AI in healthcare compliance. As the sector continues to evolve, so too must the strategies that safeguard the ethical and lawful use of AI technology. As we navigate the complex interplay of AI, healthcare, and compliance, the words of one expert resonate: “Don’t let me do something stupid.” With careful consideration and a collaborative approach, we can harness the benefits of AI while maintaining the integrity and security of healthcare systems.
Interested in learning more about how you can integrate AI scenarios into your scenario testing program? Visit our Resilience Leader’s Toolkit for more resources and bring a free Microsimulation home to your team!