Regulation of AI in Healthcare Utilization Management and Prior Authorization Increases
Highlights
- Over the past two years, federal and state government agencies have moved to regulate the deployment of artificial intelligence (AI) in the healthcare setting, including in utilization management (UM) and prior authorization (PA) processes used to determine insurance coverage for medically necessary healthcare items and services.
- This Holland & Knight alert provides a summary of these efforts to regulate the use of AI in UM and PA, as well as recommendations for key stakeholders, including managed care plans and UM organizations, to help ensure they remain compliant within this constantly shifting regulatory landscape.
Federal Regulation of AI in Healthcare
On Oct. 30, 2023, President Joe Biden issued the Executive Order on the Safe, Secure, and Trustworthy Development and Use of Artificial Intelligence (the E.O.). Among other initiatives, the E.O. requires the U.S. Department of Health and Human Services (HHS) to develop a strategic plan that includes policies and potential regulatory action regarding the deployment of AI in the health and human services sector. In particular, HHS is required to address the "development, maintenance, and use of predictive and generative AI-enabled technologies in healthcare delivery and financing – including quality measurement, performance improvement, program integrity, benefits administration, and patient experience."1 Further, HHS is required to develop an AI assurance policy to enable the evaluation of AI-enabled healthcare tools.
Prior to the E.O., on April 12, 2023, CMS issued the Medicare Program; Contract Year 2024 Policy and Technical Changes to the Medicare Advantage Program final rule (MA Policy Rule). This rule included provisions related to the use of AI in UM/PA processes, highlighting that Medicare Advantage (MA) organizations (MAOs) "must ensure that they are making medical necessity determinations based on the circumstances of the specific individual, as outlined at 42 C.F.R. § 422.101(c), as opposed to using an algorithm or software that doesn't account for an individual's circumstances."2 The MA Policy Rule also noted that any use of AI in healthcare, including in utilization review (UR), must adhere to the Health Insurance Portability and Accountability Act (HIPAA) and that any use of AI should ensure fair and equitable decision-making, as well as mechanisms to review and contest AI-generated decisions. The MA Policy Rule became applicable to MA coverage beginning Jan. 1, 2024.
On Jan. 17, 2024, CMS issued the Interoperability and Prior Authorization final rule.3 This rule mandates that affected payers (including MAOs) comply with new standards for coverage criteria and utilization management. In particular, the rule requires payers to implement, by Jan. 1, 2027, a "Prior Authorization Application Programming Interface" (API) to streamline the PA process. Under the rule, impacted payers must send PA decisions to providers within 72 hours for expedited (i.e., urgent) requests and within seven calendar days for standard (i.e., nonurgent) requests. MAOs can deploy AI to meet these new time limitations or to adjust levels of clinical reviewer staffing, but they must still ensure that providers are properly involved in the decision-making process.
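For illustration only, the rule's two decision timeframes can be expressed as a simple deadline calculation. This is a hypothetical sketch, not legal guidance or an actual payer implementation; the function name and the assumption that the clock runs from receipt of the request are ours.

```python
from datetime import datetime, timedelta

# Decision windows under the Interoperability and Prior Authorization final
# rule: 72 hours for expedited (urgent) requests, seven calendar days for
# standard (nonurgent) requests.
EXPEDITED_WINDOW = timedelta(hours=72)
STANDARD_WINDOW = timedelta(days=7)

def pa_decision_deadline(received_at: datetime, expedited: bool) -> datetime:
    """Return the latest time a payer may send its PA decision (illustrative)."""
    window = EXPEDITED_WINDOW if expedited else STANDARD_WINDOW
    return received_at + window

request_time = datetime(2027, 3, 1, 9, 0)
print(pa_decision_deadline(request_time, expedited=True))   # 2027-03-04 09:00:00
print(pa_decision_deadline(request_time, expedited=False))  # 2027-03-08 09:00:00
```

A production system would, of course, also need to handle time zones, clock-start rules and extension scenarios defined in the rule itself.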
On Feb. 6, 2024, CMS issued a set of frequently asked questions (FAQs) providing further guidance on this issue. In the FAQs, CMS confirmed that MAOs may utilize AI in the PA process, provided the MAO ensures that the application of AI complies with MA rules governing coverage determinations. In particular, CMS reiterated its guidance from the 2023 MA Policy Rule: while MAOs may utilize AI to assist in making coverage determinations by, for example, predicting patient outcomes such as the potential length of stay, an MAO cannot rely solely upon AI to make a determination of medical necessity, including decisions to terminate or approve a particular service. According to CMS, plans ultimately must base their decisions on the individual patient's condition as supported by clinical notes, patient history and the recommendations of the patient's supervising physician.
State Regulation of AI in Healthcare
In 2024, there has been significant activity at the state level to regulate the use of AI in healthcare decision-making, including in UM/PA processes.
- Colorado: On May 17, 2024, Colorado adopted the Consumer Protections in Interactions with Artificial Intelligence Systems Act. The act applies to developers of "high risk AI systems," which include AI systems used by healthcare providers to make decisions that have a "material legal or similarly significant effect on the provision or denial to any consumer of health care services." The act requires affected developers to use reasonable care to avoid "algorithmic discrimination," defined to include any condition in which the use of AI results in unlawful differential treatment or impact that disfavors individuals or groups based on their age, color, race, ethnicity, religion, national origin, genetic information or other protected status. By 2026, the act requires developers to conduct impact assessments to measure the accuracy and fairness of their AI systems and to disclose any identified defects to system users. Additionally, the act requires insurers to notify individuals of AI-generated decisions and explain how the AI system specifically contributed to the decision. The act also provides individuals with a right to appeal such decisions.
- California: On Sept. 28, 2024, California enacted Assembly Bill 3030 requiring healthcare providers to disclose when AI is being used in patient care and obtain explicit consent from patients before utilizing AI-powered systems. On the same date, California adopted Senate Bill 1120, which introduces regulations concerning the use of AI, algorithms or other software tools in UR. This legislation mandates that a qualified human individual must review UR and UM medical necessity and coverage determinations, ensuring that decisions affecting healthcare services are not solely left to automated systems. For a full summary of these laws, see Holland & Knight's previous alert, "The Future for AI Usage in California Healthcare Hinges on Governor's Indication of State Limits," Oct. 10, 2024.
- Illinois: On July 19, 2024, Illinois enacted HB 2472, amending the Managed Care Reform and Patient Rights Act. The bill requires UM programs that use algorithmic automated processes to render adverse determinations to use evidence-based criteria compliant with the accreditation requirements of either the Health Utilization Management Standards of the Utilization Review Accreditation Commission (URAC) or the National Committee for Quality Assurance (NCQA). Additionally, the bill requires health plans to ensure that only clinical peers make adverse determinations regarding the medical necessity of a healthcare service. However, the bill allows either a healthcare professional or an accredited automated process to certify the medical necessity of a healthcare service.
- New York: Assembly Bill A9149 was introduced on Feb. 8, 2024, and is currently pending signature by Gov. Kathy Hochul. The bill mandates significant oversight and transparency in the use of AI in UM. In particular, the bill requires health insurers to conduct clinical peer review of AI-based decisions and to disclose their use of AI on their websites. Furthermore, the bill establishes a certification process whereby health insurers would be required to submit their algorithms and data sets to the state's Department of Financial Services for certification that they will not result in discrimination against protected classes of individuals and that they otherwise adhere to clinical guidelines.
Takeaways for Key Stakeholders
- Monitor Regulatory Developments: It is crucial for managed care plans, UM organizations and other UM/PA stakeholders to stay informed about the latest federal and state regulations concerning the use of AI in UM/PA activities. This includes consistently monitoring legislative developments at the state level and assessing whether pending bills are likely to be enacted, as well as the timeframes for enactment and compliance. In addition, while many MAOs and downstream entities have solid processes for implementing new CMS regulations and guidance, it is also important to be aware of AI-related developments driven by other federal agencies, including the broader HHS organization, the Office of Inspector General (OIG), the Office for Civil Rights (OCR) and the U.S. Department of Justice (DOJ), as well as by the executive branch and Congress. As just one example, the DOJ's Criminal Division has revised its Evaluation of Corporate Compliance Programs guidance to incorporate several AI-driven assessment questions. For a more detailed explanation, see Holland & Knight's blog post, "New DOJ Compliance Program Guidance Addresses AI Risks, Use of Data Analytics," Oct. 30, 2024.
- Evaluate Current Processes and Impacts: UM/PA is often complex, with multiple hand-offs between internal and vendor teams that each perform key functions within the overall process and across a wide range of managed care plan types, all of which are regulated under different laws and regulations based on program and location. It is important for managed care plans in particular to have a thorough understanding of the pieces and mechanics behind this process and which rules may apply. If one type of program is subject to state laws that require a clinical reviewer to issue an adverse determination, will the plan take a conservative approach and require all of its coverage determinations to be reviewed by a clinical reviewer, even where a specific state does not require it? If not, how will the plan ensure that the rules are being applied correctly in a system or platform that processes UM/PA requests across all service areas and lines of business? Can AI-driven functionality be applied selectively across specific reviews or cases?
- Carefully Integrate and Assess AI Functionality: Compliance efforts need to span the two key phases of AI integration in UM/PA: the design/implementation phase and ongoing operation. AI-driven solutions must be carefully tested prior to implementation to ensure any "learnings" acquired by the AI platform are based on a representative sample of use cases and scenarios, guarding against inaccurate or discriminatory results. There should be a well-documented plan to scale these solutions with appropriate quality checks that may compare, for example, the results of an AI-driven coverage determination side by side with determinations issued by live clinical reviewers. Once the AI technology "goes live," stakeholders should continue to monitor key metrics such as decision accuracy, timeliness, and patient and provider complaints to detect any signs of weakness in the revamped process. Stakeholders should also develop AI-specific policies, protocols and trainings to ensure their personnel fully understand the benefits and risks of this technology and their roles in ensuring successful implementation.
- Collaborate with Stakeholders: Stakeholders should seek opportunities to engage with regulators, healthcare providers, patient groups and technology experts to help navigate the complexities of AI in healthcare and understand practical implications. As UM/PA itself has been subject to increased regulatory and media scrutiny over the past couple of years, collaboration can also foster the development of best practices for the ethical and effective use of AI in UM/PA. This is a new regulatory and technological landscape, and regulators are still catching up. In the meantime, stakeholders should be committed to setting strong industry standards to build greater trust and reliability in their platforms.
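To make the "selective application" question in the takeaways above concrete, the routing decision a UM platform faces can be sketched as follows. This is a simplified, hypothetical illustration only (not legal advice): the state list, the rule that a denial in those states requires a clinical peer, and the "conservative mode" flag are all assumptions for the example.

```python
# States assumed (for this sketch only) to require a clinical peer to issue
# any adverse determination, however the case was triaged.
CLINICIAN_REQUIRED_STATES = {"CA", "IL", "NY"}

def requires_clinical_review(state: str, proposed_outcome: str,
                             conservative_mode: bool = False) -> bool:
    """Decide whether a human clinical reviewer must finalize the case.

    conservative_mode mirrors the "apply the strictest rule everywhere"
    approach discussed above: every determination goes to a clinician.
    """
    if conservative_mode:
        return True
    if proposed_outcome == "deny" and state in CLINICIAN_REQUIRED_STATES:
        return True
    return False

# An AI-suggested denial in California is escalated to a clinician...
print(requires_clinical_review("CA", "deny"))     # True
# ...while an approval in Texas may proceed without escalation.
print(requires_clinical_review("TX", "approve"))  # False
```

A real platform would drive this from a maintained rules table per state, program and line of business, but the design question is the same: encode the rules explicitly rather than letting AI-driven functionality apply uniformly by default.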
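The side-by-side quality check described in the takeaways above can be reduced to a simple agreement metric between AI-suggested and clinician-issued determinations on the same cases. This is a minimal sketch under assumed inputs; the 95 percent threshold and the decision labels are illustrative, not a regulatory standard.

```python
def agreement_rate(ai_decisions: list[str], reviewer_decisions: list[str]) -> float:
    """Fraction of cases where the AI suggestion matched the clinician's decision."""
    if len(ai_decisions) != len(reviewer_decisions) or not ai_decisions:
        raise ValueError("decision lists must be non-empty and of equal length")
    matches = sum(a == r for a, r in zip(ai_decisions, reviewer_decisions))
    return matches / len(ai_decisions)

ai = ["approve", "deny", "approve", "approve"]
human = ["approve", "approve", "approve", "approve"]
rate = agreement_rate(ai, human)
print(f"agreement: {rate:.0%}")  # agreement: 75%
if rate < 0.95:  # illustrative threshold, to be set by the plan's quality program
    print("flag for review before scaling AI-driven determinations")
```

In practice, stakeholders would also break this metric out by state, line of business and protected-class proxies, since aggregate agreement can mask the discriminatory patterns regulators are focused on.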
Conclusion
The regulatory environment surrounding the use of AI in healthcare, particularly in UM/PA, is rapidly evolving. Insurers must remain vigilant and adaptable, ensuring that their AI applications and processes for conducting UM/PA are compliant with the latest regulations. Holland & Knight's Healthcare & Life Sciences Team will continue to monitor for any new federal and state initiatives governing the use of AI in UM/PA – and more generally within the healthcare sector – and is ready to assist as you further develop and implement AI-driven solutions.
Notes
1 E.O., Sec. 8(b)(i)(A).
2 88 Fed. Reg. 22120, 22195 (April 12, 2023).
3 89 Fed. Reg. 8758 et seq.
Information contained in this alert is for the general education and knowledge of our readers. It is not designed to be, and should not be used as, the sole source of information when analyzing and resolving a legal problem, and it should not be substituted for legal advice, which relies on a specific factual analysis. Moreover, the laws of each jurisdiction are different and are constantly changing. This information is not intended to create, and receipt of it does not constitute, an attorney-client relationship. If you have specific questions regarding a particular fact situation, we urge you to consult the authors of this publication, your Holland & Knight representative or other competent legal counsel.