May 1, 2024

Podcast - An FTC Official Speaks About the Regulation of AI Technology

Clearly Conspicuous Podcast Series

In this episode of his "Clearly Conspicuous" podcast series, "An FTC Official Speaks About the Regulation of AI Technology," consumer protection attorney Anthony DiResta dives into insights from Michael Atleson, a senior attorney at the Federal Trade Commission (FTC), on the agency's approach to regulating artificial intelligence (AI). This episode examines how the FTC uses Section 5 of the FTC Act to prosecute unfair or deceptive practices involving AI technology, such as exaggerating AI capabilities or using AI for deception like deepfakes.

Good day and welcome to another podcast of Clearly Conspicuous. As we've noted in previous sessions, our goal in these podcasts is to help you succeed in this current regulatory environment, which is very aggressive and progressive, to make you aware of what's going on with the federal and state consumer protection agencies, and to give you practical tips for success. It's a privilege to be with you today.

Today, we discuss the regulation of artificial intelligence by the Federal Trade Commission. All of the points stated in this podcast today come directly from an FTC official: Michael Atleson, a senior attorney for the FTC, who joined me and one of my colleagues for a webinar presentation on AI regulation. These takeaways apply to businesses in every single industry sector, from financial services to healthcare product manufacturers to retailers and many others.

They apply to all businesses that use AI systems or tools in the course of business or that market their AI systems. We hosted Mr. Atleson for the webinar on November 8th. He has been with the FTC for nearly two decades and serves as a staff attorney in the FTC's Division of Advertising Practices. During the interview, Mr. Atleson was asked dozens of questions covering a broad range of topics concerning AI and best practices, including the working definition of AI, the FTC's philosophy concerning AI, the legal basis for the FTC to regulate AI, recent enforcement actions concerning unfair or deceptive use of AI, federal directives — including the Biden Administration's recent executive order — bias and discrimination, cooperation among agencies, the duty to monitor AI products, liability and available relief to consumers and the government as a result of an enforcement action, risk management, and the future of AI regulation. So we really covered quite a bit.

How Companies View AI vs. How the FTC Approaches It

Let's step back a moment and get some background and context here. The use of AI is increasingly prevalent in every industry sector of the U.S. economy, including financial services, healthcare and life sciences, retail, technology, hospitality and tourism, transportation, education, media, telecommunications and manufacturing. So let's start right in on square one, talking about the definition of AI and how the FTC sees it. Obviously, AI, artificial intelligence, is a fairly ambiguous term. It means different things to different people. The FTC does not employ one official definition of AI. That said, federal sources tend to ascribe broad definitions to AI. AI goes beyond just chatbots. AI encompasses algorithms in the form of tools and systems that utilize computations and predictive coding, used by a variety of industries in the regular course of their business.

From the FTC's perspective, the focus is on AI in the marketing space as many companies market their AI capabilities, which has the potential to harm consumers. OK, so what is the FTC's approach? The FTC views AI through the lens of its own mission: consumer protection. In other words, the FTC wants companies to confront the hard questions surrounding AI's impact, its value and its potential negative consequences. In contrast, companies often view AI through the lens of the technology itself. However, the FTC cautions that companies must acknowledge the risks of AI and remember that the data collected and processed at the end of the day is information about consumers.

The FTC's Legal Basis for Taking Action and What It's Done So Far

So what are the legal underpinnings? How can the FTC do this? Section 5 of the FTC Act prohibits unfair or deceptive advertising practices in or affecting commerce. This is a broad and obviously flexible provision, under which the FTC actively prosecutes companies that deploy AI in a harmful or deceptive manner. The FTC emphasizes that Section 5 is more than sufficient to regulate AI; no further legislation is required. Thus, any marketing or use of AI must adhere to the long-established principles found within Section 5. In the case of deception, the FTC has identified two common scenarios:

The first is instances in which companies exaggerate the capabilities of AI as their selling point. The fake AI problem, if you will.

The second scenario is instances where AI is deployed solely to deceive the consumer through deepfakes, which includes using cloned voices and language models to develop phishing messages.

Importantly, the FTC has brought several enforcement actions against companies engaging in the harmful use of AI and other algorithms. One involved luring consumers to invest in online stores by using deceptive claims that the company's AI ensures success and profitability: the FTC sued Automators for claiming that its AI machinery was trained to maximize revenue, which would help users achieve over $10,000 per month in sales. Another involved promoting smart devices that claim to treat health conditions: the FTC sued Physicians Technology for falsely claiming that its low-level light therapy device emitted infrared and visible light to diagnose and treat chronic pain and reduce inflammation. Then there's overstating the benefits of automated AI investment services: the FTC sued D.K. Automation for pitching supposed cryptocurrency investment services as the "number one secret passive income crypto trading bot," which the company claimed could generate profits while you sleep. And there's deceiving consumers about the use of facial recognition technology and associated retention policies as well: the FTC sued Everalbum for applying facial recognition technology to customers' content despite promising not to use the technology unless the customer affirmatively chose to activate the feature. In addition, the company failed to keep its promise to delete the content of customers who deactivated their accounts, and instead retained the data indefinitely. That's a lot to digest.

Key Takeaway

Here's the key takeaway. The FTC is serious about regulating and enforcing laws impacting AI technology. It is a major influencer right now in developing policy for governmental agencies and legislation. In our next podcast, I'll tell you more about what Michael Atleson had to say. Until then, I wish you continued success and a meaningful day. Thank you.
