April 8, 2025

Podcast - The “I” in FOCI and AI: Innovation, Intelligence, Influence

Are We All Clear? Facilitating Security Clearances

In the 20th episode of "Are We All Clear? Facilitating Security Clearances," host Molly O'Casey is joined by John Metz, a product manager at Agile Defense, and Antonia Tzinova, the head of Holland & Knight's CFIUS and Industrial Security Team. The trio discusses the impact of artificial intelligence (AI) in the industrial security space and its potential to help companies mitigate risks of Foreign Ownership, Control, or Influence (FOCI), insider threats and cybersecurity challenges. Mr. Metz touches on some helpful applications of AI, such as automated red teaming, supply chain risk management and identifying potential intellectual property (IP) theft. However, he also cautions that there are more dangerous uses of this technology, in particular deepfakes and autonomous malware development. Overall, the group agrees that AI should continue to be used and developed and emphasizes that it should be viewed as a tool rather than a replacement for humans.

Listen to more episodes of Are We All Clear? here.

Molly O'Casey: Welcome to the 20th episode of Are We All Clear? The Podcast on Facilitating Security Clearances. I'm your host, Molly O'Casey, an international trade associate with Holland & Knight's Washington, D.C., office. Today's episode will discuss the "I" in FOCI and AI, specifically in terms of innovation, intelligence and influence connected to AI. Today's guest speaker is John Metz. John is a product manager at Agile Defense, a Virginia-based defense and national security-focused government contracting firm specializing in digital transformation, cybersecurity and intelligence. I'm also joined by Antonia Tzinova, a Holland & Knight partner in Washington, D.C., who leads H&K's CFIUS and Industrial Security Team. Welcome to the podcast, y'all. So in my world — that is, the regulatory compliance world — when we think about "industrial security," at a very high level, we're analyzing corporate structures and practices to identify risks of foreign ownership, control or influence, aka FOCI, and other risks of leakage faced by cleared government contractors. These risks come into play because they're dealing with classified and potentially sensitive information. To protect national security interests, the Defense Counterintelligence and Security Agency, or DCSA, requires that government contractors mitigate these risks. We help companies identify the risks, develop mitigation strategies in partnership with DCSA and assist with setting up the legal structures that allow cleared companies to obtain and maintain a facility security clearance, or FCL, as well as to safeguard classified information. So that's kind of how my lawyer brain conceptualizes these issues, but today we're talking more about the technology side of things. To set the tone for the rest of this episode, John, could you please provide a general overview of how AI impacts the industrial security space? What are some of the themes that people should be thinking about whenever they consider these issues?

John Metz: Well, first of all, thank you again for having me. I think the right term for what's happening right now — and a concept I'll likely come back to over the course of this episode — is an arms race. AI, especially generative AI, offers enormous opportunities for firms to streamline all aspects of how they operate, from the very mundane, like CRM management and some HR functions, to the extraordinarily complex, like novel drug development or, in our case, insider threat and other security-related roles. And when it comes to security, it presents opportunities to streamline and integrate what have historically been relatively siloed areas, from supply chain risk management to insider threat, IP protection, online sentiment analysis and much more. So the industrial security world is only beginning to adapt to this new reality in which humans are increasingly being augmented by autonomous AI agents. On the other hand, though, generative AI also makes it easier for bad actors to do bad things: sophisticated deepfakes, autonomously developed malware, online bots spreading convincing disinformation with minimal human involvement. These are all problems that exist in the present, not the future.

Molly O'Casey: So it sounds like AI has the potential to impact a really broad scope of issues. There are some positives, and there's also some negatives. What are some of the problems that AI is being leveraged to solve within the industrial security space? How does it help companies mitigate foreign ownership, control or influence, as well as insider threats and cybersecurity threats?

John Metz: So before I dive into that, one crucial distinction I want to make is between generative and non-generative AI. Both of these are made possible because of this enormous and accelerating advance in machine learning and pattern recognition. But they play fundamentally different roles. So whereas non-generative AI tools are making it increasingly easy to identify patterns and identify the needle in the haystack, the pattern of behavior that's most likely to be associated with a threat down the road — say an insider threat or say anomalous cyber activity — that's a role for non-generative AI. But generative AI really relates to what I think is the actual core of the revolution that's happening right now, which is autonomy. So I'll dive a little into that.

So AI insider threat detection and cyber threat detection tools are a really good example of AI's potential to catch and mitigate threats early. So red teaming has been an SOP, a standard operating procedure, in cybersecurity for a very long time. And what that essentially means is that, back in the day, not too long ago, a firm that wanted to test its own cybersecurity measures would pay an actual team of hackers to basically try to break into their systems. AI increasingly enables automated red teaming where, rather than having a team of people who are directly trying to gain access to verify that you've got the right procedures in place, you can actually have autonomous AI actors that can do that for you. They can do it a lot faster and they can do it at lower cost. So that's a really good example of how autonomy, the ability for AI to act of its own accord, is really what's revolutionary here.

So in our case, our corporate economic threat intelligence practice focuses on helping companies be aware of and hopefully prevent foreign state-sponsored espionage, usually efforts by a foreign government acting on behalf of its national champions, you know, the companies that it wants to be the leaders in their field, whether it's EV batteries or drug development, you name it. And what's particularly important for us, among other things, in preventing these so-called economic aggression risks is something called IP whitewashing, which is cases where legitimate scholarly research activity results in the unwanted and often illicit transfer of IP, again, usually to a foreign government seeking to promote its own industry at the expense of our customers. So we've been able to build AI agents that use the latest in generative AI technology to autonomously scout out potential IP whitewashing cases, alert customers in real time and, most importantly, make recommendations of their own accord about how to proceed. So, broadly speaking, the ability to use generative AI to produce near-finished intel products in minutes instead of hours, and in many cases to do so autonomously based on alerts it finds, either on the open web or in company systems, frees up intel professionals to spend their time on the highest value-add tasks, which for us is the real benefit of AI.

Molly O'Casey: That's really interesting. I didn't consider the involvement of foreign governments in these attacks on sensitive sectors.

John Metz: Yeah, it's absolutely prolific.

Molly O'Casey: Can you explain what red teaming means? I know you provided a quick definition, but it seems to be pretty crucial to how AI functions within cybersecurity.

John Metz: Sure, so just like in wargaming, where you've got a blue team and a red team and you want somebody to simulate the role of an adversary, red teaming functionally means that you have either a team of people or a group of AI agents or other autonomous actors doing things like penetration tests and other efforts to simulate an attack on systems. So that's a concept that's most central to cybersecurity, but as we see it, similar concepts can be applied to supply chain risk management and to phishing simulations, where rather than having a person or some other kind of legacy software producing these simulated phishing emails, we can have AI agents that can generate those, send them out and tailor them to what we think the risk factors might be for an individual. You know, that's a great example of — and I'll likely get into this more over the course of this episode — the interdisciplinarity that I think AI supports.
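
To make the tailored phishing-simulation idea concrete, here is a minimal Python sketch. It is illustrative only: the employee risk factors, scenario themes and the draft_lure() stub are invented for the example, and in practice that stub would wrap whatever generative model and delivery platform a security team actually uses.

```python
# Minimal sketch of an AI-assisted phishing simulation loop (illustrative only).
# draft_lure() is a placeholder for a real generative-model call.

from dataclasses import dataclass, field

@dataclass
class Employee:
    name: str
    role: str
    risk_factors: list = field(default_factory=list)  # e.g. recent travel, admin rights

def draft_lure(employee: Employee) -> str:
    """Placeholder for a model call that tailors a training lure to an
    individual's risk profile. Here it just fills in a fixed template."""
    themes = {
        "foreign_travel": "an itinerary change for your upcoming trip",
        "admin_rights": "an urgent credential reset for a privileged account",
        "vendor_contact": "an invoice from a supplier you work with",
    }
    theme = next((themes[f] for f in employee.risk_factors if f in themes),
                 "a routine HR policy update")
    return f"Simulated training email for {employee.name} ({employee.role}) about {theme}."

def run_campaign(employees):
    """Generate tailored simulations and collect results for the security team."""
    results = []
    for emp in employees:
        # In a real campaign the lure would be delivered and the outcome tracked;
        # here we simply record the generated scenario.
        results.append({"employee": emp.name, "scenario": draft_lure(emp)})
    return results

if __name__ == "__main__":
    staff = [
        Employee("A. Analyst", "Intel Analyst", ["foreign_travel"]),
        Employee("S. Admin", "Sysadmin", ["admin_rights"]),
    ]
    for row in run_campaign(staff):
        print(row)
```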

Molly O'Casey: I guess it goes back to your comment at the outset that this is a bit of an arms race. I suppose your red teaming capabilities are only as good as the technology you have, and if whoever you're trying to protect against has better technology, I don't want to say it's going to be of limited use, but it's definitely a concern. Is AI impacting companies' approach to security? The benefit of AI seems to be in the scale of data it can process and analyze. Is that making companies' risk identification practices more nuanced?

John Metz: For many it is, and for the rest it should be. So I just mentioned this idea of interdisciplinarity. Techniques like automated red teaming can and should be, and certainly are for us, an inspiration for how firms can improve, and do so across their risk profile. So when it comes to FOCI, contract manufacturers and other vendors can be a key risk nexus. So we've seen a potential for AI not just to identify risky upstream links in supply chains, but also to identify alternative vendors and actively make suggestions, not just about whom you shouldn't be doing business with, but with whom you should. So we've produced AI agents for customers that can not just traverse their supply chains both upstream and downstream to say, here are the closest links you have to bad actors, but can also say, given that you've got X, Y and Z components you use in your finished product and where those are being sourced from, a more secure supply chain might be retooled in this way. So that's a great example of taking that approach of autonomously simulating threats and applying it in a way that allows companies to be a lot more nimble about how they think about security threats. Something I would add to this is that AI isn't a replacement for subject matter experts. It's a tool that lets them be experts instead of focusing on the grunt work and the pencil pushing that is just an unfortunate reality of working in or around security.
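
As a rough illustration of the upstream traversal John describes, here is a small Python sketch. The supplier graph, flagged-entity list and alternative-vendor map are all invented; real tooling would work against much richer vendor, ownership and bill-of-lading data.

```python
# Toy sketch: find the closest upstream links to flagged entities in a supply chain.

from collections import deque

# company -> list of upstream suppliers (illustrative data)
SUPPLIERS = {
    "OurCo": ["VendorA", "VendorB"],
    "VendorA": ["SubSupplier1"],
    "VendorB": ["SubSupplier2", "SubSupplier3"],
    "SubSupplier2": ["FlaggedCo"],
}
FLAGGED = {"FlaggedCo", "SubSupplier3"}
ALTERNATIVES = {"SubSupplier3": ["SubSupplier4"], "FlaggedCo": ["CleanCo"]}

def closest_flagged_links(root):
    """Breadth-first search upstream from `root`, returning the shortest
    path to each flagged entity that is reachable."""
    paths = {}
    queue = deque([(root, [root])])
    seen = {root}
    while queue:
        node, path = queue.popleft()
        for supplier in SUPPLIERS.get(node, []):
            if supplier in seen:
                continue
            seen.add(supplier)
            new_path = path + [supplier]
            if supplier in FLAGGED:
                paths[supplier] = new_path
            queue.append((supplier, new_path))
    return paths

if __name__ == "__main__":
    for entity, path in closest_flagged_links("OurCo").items():
        hops = len(path) - 1
        print(f"{entity}: {hops} hop(s) away via {' -> '.join(path)}")
        for alt in ALTERNATIVES.get(entity, []):
            print(f"  possible alternative vendor: {alt}")
```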

Antonia Tzinova: It's interesting you say that, John. I was just listening to what you said, and it seems that, you know, the tools available would kind of help make decisions on whether to choose one vendor over another based on the risks identified. But I just completed our own training on how to use AI responsibly, and one of the key risks identified in using AI was built-in bias, which results from the data the tool is trained on. So how do you address this when developing an AI tool that would recommend which vendors to use over others?

John Metz: Sure, so AI large language models, or LLMs, which is a term you'll probably hear me use a bunch more over the course of this conversation, have enormous training sets. They are trained on human-produced data: written materials, images, anything that a human might produce that might help an LLM with pattern recognition. So those are very large data sets, but they're not all-inclusive. So a challenge when working with LLMs is ensuring that they are able to answer, in a satisfactory way, questions about things that are outside their training set or that are specialized in a way their training set is not conducive to. So one of the key ways we do that is through retrieval augmented generation, or RAG, where essentially we combine an AI agent, or a teammate, as we call them, or a team of AI agents with a definitive knowledge base. Think of this as an index or a vectorized data set, which they can refer back to when answering questions. So essentially, the more data you provide as a definitive knowledge base, the more you can ensure that rather than seeking to confabulate or hallucinate answers to questions they can't answer, you're actually giving them access to the right tools. In the same vein, beyond RAG, there are all sorts of tools that AI agents can be given that allow them to access particular data sources, say, to make a call to a particular website or a particular software tool to answer questions about compromised credentials, or about shipments, where they can look at bills of lading and see who's shipping which components and where in a given year or in a given timeframe. So all of these are tools that allow LLMs to go beyond the scope of what they were trained on and to do so in a way that allows them to dive really deep into particular issues. Beyond that, there are things like adjusting the so-called temperature or top-k, where you can alter the level of randomness or creativity you get in a response. So all of those things together mean that hallucination issues and built-in biases in AI agents are increasingly a very manageable problem, and increasingly we've seen that we can avoid those issues.
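
For readers who want a concrete picture of how retrieval augmented generation grounds an answer in a knowledge base, here is a minimal Python sketch. It substitutes a toy word-overlap score for the vector embeddings a production system would use, the knowledge-base passages are invented, and the final prompt assembly stands in for whatever model call the agent actually makes.

```python
# Minimal RAG sketch: retrieve the most relevant knowledge-base passages
# and prepend them to the model prompt so answers are grounded in data
# rather than improvised from training memory alone.

import string

KNOWLEDGE_BASE = [
    "VendorB sources battery cells through SubSupplier2.",
    "FlaggedCo appeared on a denied-parties list in 2024.",
    "OurCo's finished product uses components X, Y and Z.",
]

def tokenize(text: str) -> set:
    """Lowercase and strip punctuation so toy word matching works."""
    return {w.strip(string.punctuation) for w in text.lower().split()}

def score(query: str, passage: str) -> float:
    """Toy relevance score: fraction of query words found in the passage."""
    q, p = tokenize(query), tokenize(passage)
    return len(q & p) / max(len(q), 1)

def retrieve(query: str, top_k: int = 2):
    """Return the top_k most relevant passages from the knowledge base."""
    ranked = sorted(KNOWLEDGE_BASE, key=lambda p: score(query, p), reverse=True)
    return ranked[:top_k]

def build_prompt(query: str) -> str:
    """Assemble a grounded prompt; a real agent would send this to its LLM."""
    context = "\n".join(f"- {p}" for p in retrieve(query))
    return f"Answer using only this context:\n{context}\n\nQuestion: {query}"

if __name__ == "__main__":
    print(build_prompt("Which suppliers connect OurCo to FlaggedCo?"))
```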

Antonia Tzinova: Thank you. I mean, what I hear you say is that the user has to do some training on catching the tool's weaknesses and addressing those when using it.

John Metz: Well, I think I would say, in some ways, AI agents are best thought of as members of the team. So if you're hiring —

Antonia Tzinova: Try to verify.

John Metz: Sure. And I've been in situations before where, let's say, you're hiring a junior analyst. You wouldn't expect that person to, from day one, be an expert on every aspect of their job.

Antonia Tzinova: I like that.

John Metz: But what you could expect is that with the right training, with access to the right data and the right tools, they could get better and better at their job over time. The difference is that AI can do so in minutes or seconds. By connecting the right databases and the right tools and giving the right context, the learning curve, rather than being months or years, is close to instantaneous, so that's a huge advantage we get.

Antonia Tzinova: I like your analogy. To take it even further, in plain speak for lawyers: Treat AI like your junior associate and devote time to training it well.

John Metz: Sure, and our generative AI platform is called Workforce for this very reason. We want people to think about the AI agents that we produce for them or that they build themselves as members of their team that kind of reflect the diversity of roles, responsibilities and knowledge that any of their human team members bring. That analogy is exactly the one that I really want to hammer home.

Antonia Tzinova: Thank you. That's a very helpful explanation.

Molly O'Casey: What are the risks that AI itself presents to companies operating within this space? As Antonia referenced, our law firm trainings have begun to include AI over the past few years. I think most companies are becoming more aware that as much as AI presents all these, you know, interesting opportunities, it's also a source of security threats.

John Metz: Sure. So in the last two years, we've seen AI-enabled deepfakes that are convincing enough to make insiders believe their own colleagues were speaking to them in real time. I would actually point to the summer of 2022, when a number of mayors of various European cities, I think it was the mayor of Berlin and then several of her counterparts, had scheduled phone calls with somebody they believed was the Kyiv mayor Vitali Klitschko. They spoke with him for 15 minutes or more, and they didn't realize until after the fact that they'd been speaking to somebody else who had been using deepfake technology, you know, AI-augmented deepfake technology, to alter their voice and appearance. We've seen North Korean nationals posing as U.S. citizens or permanent residents to find employment with U.S. companies, often using AI to change their appearance or voices. We've seen leading model producers uncover extremely sophisticated state-sponsored mis- and disinformation campaigns that used LLMs to astroturf political sentiment. We've seen insiders at major firms putting company secrets at risk by giving confidential information to ChatGPT, not realizing that it's not stored on their own servers and that they were potentially exposing trade secrets. And then most recently we've seen DeepSeek and other AI firms from high-risk countries, mostly China, having this breakout moment in the U.S. and around the Western world as people realize that actually there's a real competition here, that it's not just the U.S. that is at the bleeding edge of AI technology. And then, you know, of course, we've seen AI that can autonomously generate and test extremely advanced, so-called polymorphic malware: the ability to build something so complex and so rapidly adjustable that it's kind of beyond the capabilities of what even a very sophisticated human hacker could do on their own. So returning to this idea of an arms race, these technologies are only getting more sophisticated, and anyone who tells you we've hit the top of the S curve, so to speak, with the capabilities of generative AI, I think that's wishful thinking. So these are all some of the threats that AI can pose, but returning to the basic principle of the lens through which I see things, I think AI is a threat, but I think it's also something that is best addressed with better AI.

Antonia Tzinova: If I may comment here and give it a bit of a twist toward other areas that we cover here in the group, I mean, I fully agree, we're not at the top. I think we're at the beginning of this thing, and it's developing so quickly that we're talking about the speed of days and months as opposed to years. But I think that the next additive thing would be quantum. And once you add quantum computing, whoever cracks this first probably will have a disproportionate advantage over the other party, right? So for our listeners, I mean, the ones that follow export controls, for example, it's no coincidence that AI has often been grouped together with quantum computing and semiconductors, because these seem to be the three pillars of the next leap in development. Technology development, I mean.

Molly O'Casey: On a less serious note, if I ever get something wrong on a call, Antonia, with a client, it's not me, it's the deepfake.

Antonia Tzinova: It's a deepfake.

Molly O'Casey: Get out of jail free card.

Antonia Tzinova: It wasn't me, somebody really impersonated me.

Molly O'Casey: Considering the rapid advancements in AI, how should firms prepare themselves to handle both the opportunities and risks associated with AI-driven security threats like polymorphic malware and increasingly persuasive deepfakes?

John Metz: Embrace AI is the basic answer. Cannons made medieval castles obsolete, but fortifications became more advanced in response. Airplanes made battleships obsolete, but navies didn't stop building ships. AI tools can be developed to detect and actively respond to threats much more quickly than humans can. And there are firms out there, like us, that are building the kinds of solutions that will help protect against the threats AI tools pose. From cybersecurity, where automated red teaming can help defuse the threat of AI-generated polymorphic malware, to supply chain and M&A due diligence and more, AI can not just identify threats, but can also help proactively counteract them. So ultimately, the solution here can't be pretending that AI isn't the future or avoiding its use altogether; it's relying on AI tools that can help you win that arms race.

Antonia Tzinova: Can you give us some practical examples of how your AI tools deal with cybersecurity or supply chain risk or M&A due diligence? And just for full disclosure, we have collaborated already on a couple of transactions, but it would be interesting for, I think, listeners to kind of visualize with some practical examples.

John Metz: Sure. So, on the corporate side, we work closely with customers who are concerned about threats to their IP, threats to their supply chains, threats from insiders who are coerced or induced to act on behalf of a foreign government. All of those are areas where we can use AI tools to identify potential risks early on and, in many cases, to actually produce a first pass at an intel product that's useful for our counterparts and customers. So for instance, on the M&A side, producing detailed reports on companies' cyber risk posture, autonomously identifying the vulnerabilities in their systems and relaying those back to our analysts, who can then produce a detailed report covering those risks. On the supply chain side, identifying which components are being shipped and where, and identifying alternative sources for those. When it comes to insider threat, AI is sophisticated enough to produce a good first pass at an insider threat report and a threat score on an individual, and then it's up to human analysts to validate that information and to determine what should or shouldn't be a cause for further investigation. So all of these things free up time for human analysts to focus on the big-picture issues and to focus on thinking about where the puck is going next in terms of nation-state threats, in terms of geopolitical issues that might affect who is targeting them and why. So really it's freeing up human labor by augmenting it with AI.

Antonia Tzinova: Thank you. That's important, knowing how it can be de-risked, particularly with respect to supply chain risk. I think any company is vulnerable out there.

Molly O'Casey: John, do you think AI will have a structural impact on security clearances? Will AI drive modifications through the traditional process? And is that going to be geared more towards cleared contractors or the regulators?

John Metz: So definitely a question worth asking, particularly as the average time to adjudicate a TS clearance, I believe, is continuing to rise. So right now a team of AI agents, speaking from experience, armed with nothing but a birthdate, a name and an identifying address, or a name of a past employer or a phone number, can go out and autonomously scrape social media profiles, track dark web activity, document geo points to help map out a person's location over time and even interact directly with an analyst or investigator's browser to show them in real time where it's gathering information and what it's taking away from it. So that's a process that would take a human many, many hours, and AI agents can complete it in minutes and produce a high-quality intel product. So there's clear potential here for AI to help, you know, find the needle in the haystack, the wheat in the chaff. Now, all that being said, there's an old quote from an IBM presentation, I think in the '70s: "A computer can never be held accountable, therefore a computer must never make a management decision." I think this is a key takeaway about how AI should or shouldn't be used in companies' processes. What I don't want to see happen is companies thinking that because they have AI agents analyzing a problem, they don't need to worry about thinking strategically. I don't want folks to think that because they have AI that can identify supply chain risks, they don't need to be concerned about some of the trends we're seeing in what some have called de-globalization, where it's becoming increasingly difficult, particularly for tech companies in this country, to have sort of the high-margin design work centered in the U.S. and the lower-margin manufacturing work centered in China or elsewhere in East Asia, where they're exposed to contract manufacturers that are often up to no good, or to rivals who, supported by their own national governments, are seeking to acquire their IP. So the role of AI, fundamentally, I think, is to free up that human effort and to allow people to work in more streamlined ways. So I think when it comes to security clearances, there's absolutely potential, and I would like to see this happen at some point, for AI to help drive the investigation process. And at the same time, I think that it's a case where there will need to be really serious efforts to ensure that it's not something that is replacing the accountability of an investigator or the manager.
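
As a simplified illustration of one piece of that workflow, turning scattered, timestamped geo points into a location timeline an investigator can review, here is a small Python sketch. The data points, coordinates and source labels are invented for the example; a real pipeline would pull them from whatever collection tooling the team actually uses.

```python
# Toy sketch: collapse scattered, timestamped geo points into a daily
# location timeline an investigator could review. All data is invented.

from collections import defaultdict
from datetime import datetime

# (timestamp, latitude, longitude, source) tuples, e.g. from public posts
GEO_POINTS = [
    ("2024-03-01T09:15:00", 38.9, -77.0, "social_post"),
    ("2024-03-01T18:40:00", 38.9, -77.1, "photo_metadata"),
    ("2024-03-04T12:05:00", 40.7, -74.0, "social_post"),
]

def daily_timeline(points):
    """Group observations by calendar day and keep them in time order."""
    days = defaultdict(list)
    for ts, lat, lon, source in points:
        when = datetime.fromisoformat(ts)
        days[when.date()].append((when.time(), lat, lon, source))
    return {day: sorted(obs) for day, obs in sorted(days.items())}

if __name__ == "__main__":
    for day, observations in daily_timeline(GEO_POINTS).items():
        print(day)
        for time_, lat, lon, source in observations:
            print(f"  {time_}  ({lat:.1f}, {lon:.1f})  from {source}")
```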

Antonia Tzinova: It's interesting you say that, and I want to pick up on this quote that you mentioned, that a computer must never make a management decision. I mean, I recently read that relying on AI to perform certain tasks numbs the mind, in that you become more and more reliant on the AI and literally start doubting your ability to perform the tasks yourself. How do we protect against this? At a company level, but also at the individual user level.

John Metz: Yeah, so it's a concern I share in certain contexts. Anyone who's following what's happening in universities and schools, with increasingly widespread use of ChatGPT to cheat on homework or on tests, it's concerning. And that's part of the reason I keep coming back to this idea of using AI to free humans up rather than using AI to replace that sort of higher-order thinking. I've seen from first-hand experience how AI that can autonomously gather the kind of intelligence that any analyst would dream of having, but doesn't really have the time or the capabilities to collect on their own, can allow them to identify patterns that they otherwise wouldn't see. So there's a balance here between using AI effectively to gather intel and ensuring that AI doesn't kind of replace that higher-order thinking, but the way the industry is evolving, I think we're going to see a lot more positives than negatives here in terms of how subject matter experts can expand and augment their own capabilities with AI.

Molly O'Casey: Thanks, John. Any parting thoughts?

John Metz: I would just go back to this: AI is not a replacement for humans. It's a member of your team.

Antonia Tzinova: And I actually, I'm feeling really positive about that. And what I hear you say repeatedly, and what I'm taking away from this one, is free up rather than replace, which I think is true for all human history, right, and the development of human society. So I like that a lot.

John Metz: Yeah, absolutely. I mean, the introduction of the assembly line didn't make humans unimportant in manufacturing. It led to more jobs in all sorts of areas that folks 200 years ago couldn't even have imagined, and I think AI is going to be similar. I think we will see that humans' ability to interact with the world around them and to change the world around them is going to increase dramatically, and I think by using AI effectively, we can ensure that that change is positive rather than negative. That it's helping protect companies' assets rather than making it easier to expose them, that it is helping teams, critical asset protection teams, become increasingly sophisticated rather than allowing their adversaries to swamp them with mis- or disinformation or to break more easily into their systems. I think embracing this proactively is the best thing that any of us can do to ensure that the future is brighter than the past.

Molly O'Casey: Sounds like some positives, some negatives, but definitely something that can't be ignored. Thank you so much for coming on, John. We really appreciate you as a guest speaker, and thank you, Antonia. As always, it's good to have you on.

Antonia Tzinova: Thank you, Molly.

John Metz: Of course. Thank you for having me.

Molly O'Casey: This series is full of acronyms. This week we had some slightly new ones with large language models, or LLMs, retrieval augmented generation, or RAG, and customer relationship management, or CRM, and a few classics like foreign ownership, control or influence, or FOCI, and the Defense Counterintelligence and Security Agency, or DCSA. Each episode we ask our speaker to explain an acronym that featured in the episode, with wrong answers only. John, would you like to choose an acronym?

John Metz: Sure, so for LLM, I'm stuck between limited liability machine, which is my vision for an AI language model that cannot be held liable for the advice it gives you, looking lost mostly, which was me when I first started trying to build with AI, and for the American lawyers in the audience, LL.M. is also the degree you get when you're too masochistic to stop at a J.D.

Molly O'Casey: Or, funnily enough, whenever you've been educated abroad, so I don't actually have a J.D., I have an LL.M.

John Metz: Yes.

Antonia Tzinova: And I mean, sorry, but I have to add mine here, Molly. I thought that John might pick this one, but since he hasn't, I'm going with AI, already influenced.

Molly O'Casey: Amazing.

John Metz: I like that one.

Molly O'Casey: Thanks, Antonia. Well, with that, I hope everyone has a great week.
