Podcast - Part 2: An FTC Official Speaks About the Regulation of AI Technology
In this episode of his "Clearly Conspicuous" podcast series, "Part 2: An FTC Official Speaks About the Regulation of AI Technology," consumer protection attorney Anthony DiResta continues his discussion of the Federal Trade Commission's (FTC) approach to regulating artificial intelligence (AI). The Biden Administration's executive order on AI directs the FTC to use its authority to protect consumers from bias and discrimination in AI technology. Mr. DiResta explains that companies may face liability for the actions of their AI vendors and contractors, so they should implement compliance measures such as pre-release assessments, transparency, vendor vetting, employee training and monitoring. He also shares that as AI regulation evolves, the FTC will continue using its unfairness jurisdiction to prosecute harmful AI applications, while more comprehensive federal and state AI laws may emerge.
Good day. This podcast is part two in a series concerning the regulation of artificial intelligence by the Federal Trade Commission, and what we learned from an interview I did with Mr. Michael Atleson, a senior attorney with the FTC.
Biden Administration Executive Order on AI
Let's talk about federal policy. The Biden Administration is highly concerned about the risks of AI, as detailed in its October 30, 2023, Executive Order on Safe, Secure and Trustworthy Artificial Intelligence. The executive order seeks to establish new standards for AI safety and security, protect Americans' privacy, advance equity and civil rights, protect consumers and promote innovation. And, importantly, the executive order directs certain agencies, such as the Department of Homeland Security and the Department of Energy, to advance AI safety and address AI systems' threats to critical infrastructure. With respect to the FTC, the executive order did not explicitly direct the commission to take specific actions. But that said, in the context of irresponsible uses of AI that result in, say, bias and discrimination, the executive order signaled that the FTC should use its existing authority to protect consumers' rights. A fair reading of the executive order suggests that the FTC should continue to regulate AI within the scope of its jurisdiction under the FTC Act.
Bias and Discrimination in AI Technology
So now let's talk about the potential for biased and discriminatory uses of AI. As indicated by the executive order, AI outputs can sometimes be biased or discriminatory. It is well documented that AI systems have discriminated, often inadvertently, on the basis of individuals' immutable characteristics, including race, ethnicity, gender and language.
But what triggers AI bias? There are a number of reasons why an AI system can discriminate. Sometimes an AI system behaves this way because bias is embedded in the data on which the algorithm was trained. Other times, an AI system may discriminate because its underlying model is being used for something other than its original purpose. In either case, a company runs the risk of violating the law.
From the FTC's perspective, a company that uses an AI system that results in disparate treatment can be prosecuted under Section 5 of the FTC Act on an unfairness theory — not a deception theory, but an unfairness theory. For example, the FTC brought an action against Passport Automotive and obtained a $3.3 million settlement that will be refunded to consumers, where the company engaged in lending practices that regularly charged African American and Latino customers more in financing costs and fees. The FTC, along with the Department of Justice, the Equal Employment Opportunity Commission and the Consumer Financial Protection Bureau, is highly concerned about unfairness in AI, as detailed in their April 25, 2023, Joint Statement of Enforcement Efforts Against Discrimination and Bias in Automated Systems. So what about interagency cooperation? As a general matter, and as evidenced by that recent joint statement, federal agencies communicate and cooperate in support of federal directives. For example, in the context of AI, the FTC is conducting a joint effort with the CFPB to collect information about tenant screening tools, including whether the underlying algorithms can have an adverse impact on underserved communities. In addition, the FTC is closely coordinating with state attorneys general, given the FTC's loss of its ability to seek equitable monetary relief following the Supreme Court's ruling in AMG Capital Management, LLC v. FTC.
Monitoring and Disclaimers for Vendors and Contractors
So let's move to another topic: monitoring and disclaimers. Under the FTC Act, companies may be liable for what vendors or contractors do on their behalf. This means companies have an implied duty to vet and monitor the third parties they engage. Whether a company regularly monitors its vendors and contractors is an important factor in enforcement discretion. In other words, if a company's AI system results in consumer harm, the FTC will investigate whether the company monitored both the product and its vendors and contractors. A showing of diligence and continuous monitoring practices may dissuade the FTC from prosecuting or, at the very least, reduce the remedy. Disclaimers can be used when marketing AI, as long as they are clear and conspicuous. But the extent to which a disclaimer limits liability is narrow, similar to disclaimers and waivers in the context of, say, tort claims. To put it another way, a disclaimer cannot cure blatant deception or a harm that the consumer cannot reasonably avoid.
FTC Enforcement Approaches
So what about enforcement? During the course of an investigation and negotiations, the FTC considers both forms of remedies: injunctive relief and monetary relief. In this context, injunctive relief comes in the form of requiring companies to implement certain compliance provisions in their AI programs. If appropriate and legally available, monetary relief comes in the form of a civil penalty. Does the FTC have any recourse against the technology itself? In a 2021 commission statement, former FTC Commissioner Chopra stated that no longer allowing "data protection law violators to retain algorithms and technologies that derive much of their value from ill-gotten data is an important course correction." Based on this statement, the FTC now seeks algorithmic deletion as a remedy in its enforcement actions. For example, the FTC brought actions against Everalbum and against WW International and Kurbo, in which the commission successfully required those companies to delete both the data collected — photos and children's information, respectively — and the algorithms derived from that data.
Best Practices for Companies to Limit Liability
OK. Again, we've covered quite a bit in these last podcasts. What about best practices? What safeguards can companies implement to limit their liability? The FTC recommends reviewing its recent policy statement on biometric information. While the statement deals with biometrics, its guidance can be readily applied to AI systems. In a nutshell, the FTC believes that AI best practices include:
- conducting pre-release assessments concerning foreseeable harms
- taking steps to mitigate the risks of those harms, and not releasing the product in the first place if those risks cannot be mitigated
- being transparent to consumers regarding the collection and the use of the data
- evaluating vendors' capabilities to minimize risks to consumers
- providing appropriate training for employees and contractors whose job duties involve interacting with AI systems and their related algorithms
- conducting ongoing monitoring of AI systems to ensure they are operating as intended and are not likely to harm consumers
Companies must remember that liability under the FTC Act turns on reasonable foreseeability. In other words, the commission does not have to prove intent. Let me say that again — the FTC does not have to prove intent. That said, under a theory of unfairness, the FTC will consider the reasonableness of a company's conduct — what the company knew about its AI system, what it should have known, and what steps it took to mitigate the risks and remedy the harm — in exercising its discretion to prosecute a company.