November 6, 2024

Podcast - Decoding the Future of AI Regulation and Frontier Models

The Two Byte Conversations Podcast

In this episode of the "Two Byte Conversations" podcast series, Data Strategy, Security & Privacy attorney Kevin Angle discusses developments in artificial intelligence (AI) policy at the state, federal and international levels with Ben Rossen, Associate General Counsel for AI Policy and Regulation at OpenAI. They discuss the varying definitions of "frontier models" across jurisdictions, AI regulation through the lens of privacy and the movement toward "agentic" AI.

Listen and subscribe on Amazon.
Listen and subscribe on Apple Podcasts.
Listen and subscribe on SoundCloud.
Listen and subscribe on Spotify.
Watch and subscribe on YouTube.

Kevin Angle: Today, we're going to discuss developments in AI policy. Artificial intelligence is at the forefront of the regulatory agenda. Several bills were recently signed into law in California around AI transparency and watermarking. Although Governor Newsom vetoed a contentious bill applicable to large-scale frontier models, legislation has also been adopted in Colorado and Utah, and, of course, comprehensive privacy laws have significant requirements applicable to automated decision-making technologies. I'm Kevin Angle, senior counsel in the Data Strategy, Security & Privacy practice here at Holland & Knight. And my guest today is really at the forefront of all of these developments. Ben Rossen is Associate General Counsel for AI Policy and Regulation at OpenAI. Before that, he was special counsel at Baker Botts and a longtime attorney at the FTC, where he focused on privacy and consumer protection. Even before that, he had the distinct honor of serving as a co-clerk with me on the Eastern District of New York. Really probably the honor of his life, if I can speak for him. Welcome, Ben.

Ben Rossen: Thanks, Kevin. It's good to be here.

Kevin Angle: So I mentioned the California bills. We do not need to go into the details on those, but states have been at the forefront of AI legislation in a lot of ways. What do you see as the benefits and the risks of that approach?

Ben Rossen: Yeah, it's a great question. One, I think the states are not going to wait around. We're absolutely going to see a world where, when the legislative sessions for the next year open in a few months, there's just an absolute flood of state legislation cutting across a whole range of policy issues. And there are places where the policy rationale for the states to get involved is stronger, and places where it's weaker. The frontier regulation issue is a really tricky one. That was obviously the more controversial bill that Governor Newsom just vetoed. Of the 400 or 500 state bills that I think were proposed in different legislatures this year, that was actually the only one that touched on frontier regulation, so it was kind of an outlier compared to the issues that most of the states have been working through. What I think we're going to see more and more of is the framework that has been created around consequential decisions in the automated decision-making context. The Colorado bill will go into effect, pending potential additional changes after Governor Polis signed it. That bill is really focused on regulating high-risk uses of AI that can result in algorithmic discrimination in certain consequential decision areas, kind of similar to the Fair Credit Reporting Act context: high-risk areas like lending and credit eligibility, employment, education, etc. We're expecting to see a lot more of that. That bill spun out of a multistate process led by Connecticut State Senator Maroney and his bill, SB 2, which didn't pass last year but will certainly be coming back in the next session, along with what I expect will be a pretty vigorous multistate coalition working on a lot of those issues. So that's one area where we expect to see quite a bit.

Kevin Angle: Yeah. No, absolutely. I mean, that was a fascinating bill. When you look at just the documentation requirements embedded in the Colorado bill, it could presumably present a lot of challenges for companies to really meet all of those rules.

Ben Rossen: The Colorado bill is really focused on the deployers of a lot of these high-risk technologies, even though it does add some requirements for the developers as well to provide appropriate transparency and technical documentation, etc.

Host Note: Regulations like the EU's AI Act and the California bills put most of their emphasis on the companies that are building or modifying AI systems, whereas the Colorado law, which goes into operation in February 2026, places many of its requirements on deployers, in other words, the users of so-called high-risk systems. Those requirements include its risk management framework and impact assessments, along with a duty of care that applies to developers and deployers alike.

Ben Rossen: But I think one of the areas where there's going to be more discussion is what really constitutes substantially modifying a foundation model. What is the specific role for integrators of this technology, who are really going to be the ones driving a lot of the innovative use cases, some of which are almost certain to result in consequential decisions?

Kevin Angle: Yeah. Before we go on, we keep using this term that I want to define for people. So frontier models, these are large-scale models. Can you, just for the listeners, describe what a frontier model is? And I know it varies by law.

Ben Rossen: Yeah, sure. The actual legal framework around a frontier model, I think, is still relatively disharmonized. Frontier models are what OpenAI is working on. We are a lab that focuses on putting out cutting-edge research with the most capable models that also lead on safety and are capable of solving harder and harder problems as we move toward artificial general intelligence, which is the mission of the company. Frontier models right now have been defined under law in the EU as models trained on over a certain amount of compute. They've chosen 10 to the 25th floating-point operations, which is a very large volume of compute, somewhere around the level of models that are already out on the market. In the United States, the president's executive order was based on models that are over 10 to the 26th, one order of magnitude higher, a massively larger amount of compute. Right now, compute is basically just a proxy for the potential capabilities of the model. That works reasonably well today but may not be a regulatory framework that holds up in the future as compute gets more efficient, and a lot of these models are really heavy on inference, the compute that powers the decision-making while the model is actually running. But right now, it's a reasonable proxy for how powerful these models are, just based on their size and scale.
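
Host Note: The thresholds Ben mentions are cumulative training compute, measured in total floating-point operations (FLOPs), not operations per second. A common back-of-the-envelope estimate is that training compute is roughly 6 × (model parameters) × (training tokens). The Python sketch below, using purely hypothetical model sizes, illustrates how that estimate compares against the EU AI Act's 10^25 threshold and the U.S. executive order's 10^26 threshold; it is an illustration of the arithmetic, not a compliance tool.

# Rough training-compute estimate vs. regulatory thresholds.
# Approximation: total FLOPs ~= 6 * parameters * training tokens.
# Model sizes below are hypothetical, for illustration only.

EU_AI_ACT_THRESHOLD = 1e25  # EU AI Act systemic-risk presumption (total FLOPs)
US_EO_THRESHOLD = 1e26      # U.S. executive order reporting trigger (total FLOPs)

def training_flops(params: float, tokens: float) -> float:
    """Approximate total training compute in floating-point operations."""
    return 6 * params * tokens

models = {
    "hypothetical-70B": (70e9, 15e12),    # 70B parameters, 15T training tokens
    "hypothetical-400B": (400e9, 30e12),  # 400B parameters, 30T training tokens
}

for name, (params, tokens) in models.items():
    flops = training_flops(params, tokens)
    eu = "over" if flops > EU_AI_ACT_THRESHOLD else "under"
    us = "over" if flops > US_EO_THRESHOLD else "under"
    print(f"{name}: ~{flops:.1e} FLOPs ({eu} EU threshold, {us} U.S. threshold)")

As the output shows, the larger hypothetical model crosses the EU's 10^25 line while remaining an order of magnitude below the U.S. 10^26 reporting trigger, which is the gap between the two regimes that Ben describes.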

Kevin Angle: And you mentioned that at the state level, you're expecting to see more legislation around consequential decision-making as maybe one of the more likely paths legislation will take. Obviously, the executive order, as I think you just mentioned, is focused on frontier models. Do you expect to see more developments there at the federal level? Is that possible? Would that only be through the executive branch, or might there be federal legislation there as well?

Ben Rossen: Yeah, it's a great question. In an election year right now, the odds of seeing the federal government do really significant legislative work, I think, are just a challenge. We've all seen this with privacy and the difficulties in getting a comprehensive privacy bill into law. It's hard to be bullish on Congress accomplishing a major AI legislative package, at least before the election. That said, there are a lot of developments at the federal level. The AI Safety Institutes have been stood up, and we were very proud to reach an agreement with the U.S. AI Safety Institute to collaborate on testing of some of our newest models and to work on developing common standards for evaluations, for how to measure the capabilities and risks of these models. Frankly, that step is so important right now because the science around doing all of these evals is still really nascent, and any regulatory framework that's going to evolve needs to be grounded in the science of how you actually measure these capabilities. So there's work happening there. There are also, under the president's executive order, reporting requirements for labs that are building frontier models over 10 to the 26th. BIS within the Department of Commerce has been charged with implementing that, and it recently announced a proposed rulemaking to codify those requirements, which is still at the preliminary stage. So there will certainly be continued developments at the federal level. I just think it's unlikely that they'll result in a major legislative package on comprehensive AI regulation.

Kevin Angle: So we both were or are privacy professionals. I think I still am. I don't know if you might say you're a recovering privacy professional.

Ben Rossen: I still have my membership at the IAPP. I don't think that stands for privacy anymore, though.

Kevin Angle: That counts. Is AI regulation a privacy issue?

Ben Rossen: It's a great question. There are certainly aspects of it that are. Privacy regulation focuses so much on clear governance of data and how it's controlled within an organization, and privacy professionals especially will recognize the overlaps there. For companies that are different from OpenAI, the ones taking these various models, incorporating them into their business processes and powering them with their own data, there's an enormous amount of overlap with privacy. But the Venn diagram between privacy and AI is not a perfect circle. There are certainly areas, especially once you get toward issues that labs like OpenAI are confronted with, where you start thinking about the long-term development of AI, catastrophic risk, etc., where a different set of issues arises. And there are other things, like provenance and watermarking issues and some of the transparency issues that go along with AI, that are a little bit different. But privacy professionals are well suited within a lot of organizations to step up and play the role of ensuring that the governance mechanisms are there before you start adding this technology without thinking about the potential impacts.

Kevin Angle: Going back a little bit to some of the regulatory issues we were talking about. We were speaking about state legislation and regulation and federal legislation and regulation. Of course, there's an international dimension to this too. The EU AI Act of course gets a lot of publicity, as it well should. Is there a risk that international regulatory frameworks will diverge in a way that's harmful to innovation?

Ben Rossen: Yeah, there is a real risk that that can happen. Right now it is not clear whether a truly harmonized framework will evolve. Obviously the EU is in the lead right now, and the AI Act is of course going to have a profound impact on how AI tools are built, evaluated and released on the market. A lot of what's in the AI Act, especially for general-purpose AI models, really needs to get figured out during the code of practice process that is supposed to run over the next nine months. It's just now kicking off, and many of the specific details of how these models will be governed need to be fleshed out there, so there's still a lot of ambiguity in terms of how the law will actually operate. I certainly hope there will be a process to harmonize this with the international standards that have started to develop. There's a lot of standards-making work happening simultaneously in a lot of non-regulatory bodies. There's also an enormous amount of work that NIST has done with its AI Risk Management Framework, which I think a lot of companies have already started to look to as the clear gold standard for how to implement a lot of these tools. So I hope there will be an effort to harmonize these issues; I think it's really important for innovation. The big companies will find a way to comply, but as the ecosystem is still growing and expanding, to reduce the burden on a lot of the smaller companies, it's really important that there be clear, harmonized standards, because all of this technology is going to cut across borders.

Kevin Angle: I ask this to all my guests. How can lawyers foster innovation?

Ben Rossen: How can lawyers foster innovation? That's a great question. Far and away, the thing that every lawyer told me when I was coming up at firms was that you need to understand your client's business as well as they do to really be helpful. It never really sank in for me what that meant until I was in-house. Lawyers who really understand the pain points, the things you're struggling with, what the board of directors is thinking about, and who really get it from the perspective of the company add so much value. That's how you're able to really unlock the things that are going to help companies innovate: thinking proactively, seeing the roadblocks way in advance, all of that kind of strategic thinking.

Kevin Angle: One last fun question for you, Ben. Will artificial intelligence ever have rights?

Ben Rossen: It's a really good question. I think it's possible. We're a long way from there, but maybe not as far away as people think. It's interesting because one of the things I find most fascinating in my job is the opportunity to talk with so many researchers who are thinking years ahead about what AI is going to look like. A lot of the thinking has evolved from maybe five or 10 years ago, when the expectation was that there would be this sort of single artificial general intelligence or artificial superintelligence. I think it's a lot more likely now that there will be many different pockets of AGI that maybe interrelate with each other. Are they conscious? Will they have rights? Unclear, but I think there is going to need to be some serious thinking about how that works.

Kevin Angle: I think a lot about that. We've seen the decisions coming out of California on the California Age-Appropriate Design Code Act and how focused they are on the First Amendment. And then we have generative AI, which is speaking, in a way. I think we're a long way from saying that OpenAI is some sort of conscious being that is speaking, but there could be a point where you're thinking about free speech rights in the context of artificial intelligence.

Ben Rossen: Yeah. I mean, it gets back to fundamental thinking about what consciousness is, right? How the computers think is fundamentally different, and there are plenty of people right now who will say that they're not thinking, that these are neural nets running computations. It's going to be fascinating to watch. Take "generative AI": I often say that term is not going to survive all that long, because the goal of a lot of these models is not just to generate content but really to be a kind of personal assistant that can act on your behalf. Hence the movement toward "agentic AI," agents that can actually execute tasks on behalf of a person and automate things. That's really the next frontier where I think a lot of the work is going to move, and it'll be fascinating to see how it evolves.
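
Host Note: "Agentic AI" refers to systems that plan and execute multi-step tasks rather than only generating text. As a minimal sketch of the underlying control flow, assuming a hypothetical model interface and tool set (this is not any particular vendor's API), the surrounding software repeatedly asks the model for an action, executes it, and feeds the result back:

# Minimal sketch of an agentic loop. The model object and tool functions
# are hypothetical placeholders, not a real API.

def run_agent(goal, model, tools, max_steps=10):
    history = [f"Goal: {goal}"]
    for _ in range(max_steps):
        action, argument = model.propose_action(history)  # hypothetical model call
        if action == "finish":
            return argument                     # the agent's final answer
        result = tools[action](argument)        # e.g., tools["search"], tools["send_email"]
        history.append(f"{action}({argument}) -> {result}")  # feed the result back
    return "Stopped: step limit reached"

The point of the sketch is the loop itself: the model proposes, the host program executes and observes, and the cycle continues until the task is done or a safety limit is reached.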

Kevin Angle: I'm fascinated. It's really pretty interesting. So thank you, Ben, so much for joining the podcast. We really appreciate it.

Ben Rossen: Yeah, my pleasure.
