October 10, 2024

The Future for AI Usage in California Healthcare Hinges on Governor's Indication of State Limits

Holland & Knight Alert
Jennifer Rangel | John T. Vaughan | Shalyn Watkins | Danielle A. Giaccio

Highlights

  • California Gov. Gavin Newsom recently signed two bills regarding artificial intelligence (AI) regulations in the healthcare sphere.
  • The first bill (SB 1120) requires that a qualified human individual review utilization review (UR) and utilization management (UM) coverage determinations that turn on medical necessity.
  • The second bill (AB 3030) requires that all patient communications created by generative AI regarding or otherwise including clinical information contain a disclosure informing the patient that the message was generated using AI.
  • Despite California's expansion of regulatory oversight for AI development, Newsom vetoed the Safe and Secure Innovation for Frontier AI Models Act as proposed. His veto message indicated an intent to avoid stifling AI development and signaled that regulatory oversight should be more narrowly tailored.

From the West Coast Healthcare Desk

With the surge of artificial intelligence (AI) development in recent years, state legislatures, including California's, have contemplated how to balance patient safety and quality of care with the need for and expectation of efficiency in healthcare delivery and implementation. Holland & Knight previously discussed a landmark Texas settlement regarding the use of generative AI in healthcare and now turns its attention to California Gov. Gavin Newsom, as he recently drew a line in the artificial sand when he approved Senate Bill (SB) 1120 and Assembly Bill (AB) 3030 but vetoed SB 1047. Though all three bills regulate the use of AI software, each strives to do so in a different manner, with SB 1120 regulating the use of AI during the utilization review (UR) process, AB 3030 establishing patient notification requirements and SB 1047 contemplating safe innovation. These bills demonstrate that California is forging a path for the regulation of AI in healthcare, but how far the state will go to oversee this emerging sector remains an open question. At a minimum, the recent enactments of SB 1120 and AB 3030 are expected to impact healthcare providers, insurers and vendors.

AI Software During the Utilization Review Process (SB 1120)

Health plans approve or deny coverage for services through the UR and utilization management (UM) processes. As AI tools became available, health plans began using them to expedite UM and UR determinations of medical necessity. However, because these determinations are patient-specific, California has limited the use of AI in UR and UM processes in an effort to promote clinical consistency and equitable treatment among insured patients. Health care service plans utilizing AI or similar software for UR are now required to, in part, abide by the following:

  • UR determinations must be based on relevant applicable criteria – including clinical history, circumstances and recorded information – rather than a reliance on a group dataset.
  • The healthcare provider must be the ultimate decisionmaker, and any AI or other software must be fairly and equitably applied to prevent harm to or discrimination against enrollees.
  • The AI or other software must be periodically reviewed and open to inspection, with any revisions identified during review implemented to improve the technology's accuracy and reliability.
  • Plans must make certain disclosures regarding the use and oversight of AI or other software in the written policies and procedures establishing the UR process.

On Sept. 28, 2024, Newsom approved SB 1120, which may set a national precedent for regulating insurers' use of AI in coverage determinations. The California Hospital Association spoke out in support of the legislation, deeming AI tools helpful in streamlining coverage determinations but lacking "the ability to recognize and accommodate an individual patient's unique circumstances." Though AI may generally be utilized while performing UR and UM, it may not determine medical necessity, as that task requires clinical judgment and can be completed only by a licensed physician or other qualified professional. The bill appears to be a response to recent consumer class actions against prominent commercial insurers over their use of AI within the UR process. It is possible the ramifications of SB 1120 will prove extensive: instead of AI streamlining coverage decisions, the required human review may effectively nullify the expediency AI once permitted. This potential burden on insurers and health plans must be weighed against providers' ethical obligations to exercise clinical judgment in patient care and the industry's need for standardized claims processing in UR. For insurers utilizing AI in California, the medical director overseeing the process will need to address how AI is used in UR to ensure that final determinations of medical necessity are made by a qualified human, not by the technology.

Generative AI for Patient Communications (AB 3030)

Further demonstrating the state's focus on AI in healthcare, Newsom also signed AB 3030 into law on Sept. 28, 2024. The bill imposes significant disclosure requirements on any health facility, clinic, physician's office or other group practice utilizing generative AI for clinical-based patient communications. Compliance with the Health Insurance Portability and Accountability Act (HIPAA) and California Confidentiality of Medical Information Act also should be considered in regard to patient communications in the marketing context. Specifically, the following is required for any communications involving patient clinical information:

  • Disclaimer. The disclaimer must inform the patient that the communication was generated by AI. For written communications in physical or digital media, the disclaimer must appear at the beginning of the communication; for continuous online interactions, it must remain displayed throughout. For audio communications, the disclaimer must be stated verbally at the start of the interaction. For video communications, the disclaimer must be displayed throughout the entirety of the interaction.
  • Clear Instructions. Clear instructions must be provided on how a patient can contact an individual at the facility or provider's office.

The sole exception to the disclaimer and clear-instruction requirements applies when the AI-generated communication is first read and reviewed by a licensed or certified (human) healthcare provider. As such, the bill's passage may have implications for healthcare providers' adoption of AI, as the above measures must be established and upheld whenever generative AI is used for patient communications involving clinical information. Importantly, the requirement extends not just to the first communication with the patient but to every AI-generated communication regarding clinical information thereafter. The burden of disclosure, and whether such disclosure may affect patients' selection of care among providers, will emerge as California providers review their communications policies and incorporate the appropriate disclaimers and clear instructions to ensure compliance with the new law.

Safe and Secure Innovation Vetoed (SB 1047)

Although SB 1120 and AB 3030 both impose more stringent requirements on healthcare providers and insurers regarding AI use, the governor drew a line at regulatory oversight that does not account for an AI model's intended use. Newsom clarified in his recent veto of the Safe and Secure Innovation for Frontier AI Models Act (SB 1047) that its requirements were overly stringent because the legislation failed to account for the circumstances surrounding the use of the AI system, specifically whether it involved "critical decision-making or the use of sensitive data." The bill would have imposed harsh, expensive-to-implement regulations, including cybersecurity protections against unauthorized access or misuse, shutdown capabilities for disruptions, safety protocols to minimize and prevent unreasonable risk of critical harm, reevaluation of safeguards and procedures, annual compliance audits and periodic incident reporting.

Though such extensive regulations have not yet been enacted in California, the legislature has indicated that it is only a matter of time before a similar, perhaps less stringent, bill passes. Newsom's veto statement indicates California is willing to regulate AI safety more aggressively, but he would prefer that the federal government develop a uniform national standard. Though attention around AI includes concern that the technology could mislead the public and enable the proliferation of deepfakes and other forms of misinformation, fear about potential harms caused by AI remains mostly hypothetical. California, which has been the world's leading incubator of technological advancement for generations, will likely gather more information concerning actual harms caused by AI before taking more targeted action.

Applications of AI in defense, healthcare, financial technology (FinTech) and critical infrastructure are likely to prompt targeted regulatory efforts, as well as debate over safeguards for AI more broadly. Developers should adopt protocols protecting user privacy, work to reduce hallucinations (particularly in sensitive applications of the technology) and consider providing greater transparency about their platforms. Although the bill was vetoed, it was popular with a majority of Californians, drawing 56.9 percent support in a recent poll, suggesting that the public is concerned about AI and would support greater regulation of the technology. While the technology industry currently has a window of opportunity to show policymakers and the public that it can self-regulate, that window appears to be closing.

Looking Ahead

In the coming months, technology and healthcare companies in California can expect to see newly proposed legislation focused on intended use and consequential decision-making, which may further expand healthcare AI regulation beyond SB 1120 and AB 3030. Additionally, with SB 1120 and AB 3030 now enacted, healthcare entities and insurers in California must begin disclosing their use of and reliance on AI in UR and patient communications. Moreover, it is likely that other states will contemplate and adopt AI-focused regulations in the healthcare industry in the near future.

For more information or questions, please contact the authors.


Information contained in this alert is for the general education and knowledge of our readers. It is not designed to be, and should not be used as, the sole source of information when analyzing and resolving a legal problem, and it should not be substituted for legal advice, which relies on a specific factual analysis. Moreover, the laws of each jurisdiction are different and are constantly changing. This information is not intended to create, and receipt of it does not constitute, an attorney-client relationship. If you have specific questions regarding a particular fact situation, we urge you to consult the authors of this publication, your Holland & Knight representative or other competent legal counsel.

