Cyber Resilience
Risk Management
Vulnerability Management

Nexus Podcast: Diana Kelley on Securing AI Systems

Michael Mimoso / Jun 12, 2024

Machine learning and artificial intelligence have long been staples at organizations that analyze large data sets to spot patterns and make predictions about future performance. Financial services and pharmaceutical companies are two familiar examples. While some of these companies may have thousands of models running that inform these systems, machine learning may not have been on the radar of their cybersecurity teams until fairly recently. 

“In some ways at some organizations, it’s been operating almost as a form of shadow IT,” said Diana Kelley, chief information security officer at ProtectAI—a cybersecurity company focused on securing AI systems—on this episode of the Nexus Podcast. “Now we have [generative AI] to protect, it’s also helped put some light on what’s been happening with deployments of machine learning. Organizations are seeing this is an area in many companies where they can get better about the rigor for governance.”  


Kelley, who has held security leadership positions at Microsoft, IBM, Symantec, and other major vendors, said the emergence and popularity of ChatGPT marked a major inflection point for the growth and adoption of AI. Its rapid adoption, along with that of other third-party models and tools, forced companies to examine the security of the models being trained, as well as the integrity of the algorithms and the data being fed into them to refine and improve their accuracy. 

“The biggest questions are: 'Can we see, know, and manage what’s happening within our AI lifecycle?'” Kelley said. “'Can we understand what we’re using, how we’re using it, and where those systems are vulnerable?'”

Kelley’s company sells products that secure the AI lifecycle, from scanning third-party commercial and open-source models for vulnerabilities and identifying threats and risks within machine learning applications, to analyzing prompts sent to large language models (LLMs) to determine whether sensitive data may be leaking out or whether a prompt is malicious. Many users are buying commercial machine learning and AI models or building their own, often with open-source models as a foundation. Kelley said there is an opportunity in these cases to introduce a version of DevSecOps, known as MLSecOps, to figure out where secure development practices fit within the machine learning development lifecycle. 
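As a rough illustration of the kind of prompt analysis Kelley describes, the Python sketch below screens an outgoing prompt against a few hypothetical sensitive-data patterns before it would reach an LLM. This is not ProtectAI's implementation or API; the pattern names and regular expressions are assumptions for the example, and production tooling would go well beyond simple pattern matching.

```python
import re

# Illustrative sketch only: a minimal pre-submission prompt screen.
# The patterns below are hypothetical examples, not a vendor's rule set.
SENSITIVE_PATTERNS = {
    "email": re.compile(r"\b[\w.+-]+@[\w-]+\.[\w.]+\b"),
    "ssn": re.compile(r"\b\d{3}-\d{2}-\d{4}\b"),
    "api_key": re.compile(r"\b(?:sk|key)[-_][A-Za-z0-9]{16,}\b"),
}

def screen_prompt(prompt: str) -> list[str]:
    """Return the names of any sensitive-data patterns found in the prompt."""
    return [name for name, pattern in SENSITIVE_PATTERNS.items() if pattern.search(prompt)]

if __name__ == "__main__":
    findings = screen_prompt("Summarize the account notes for jane.doe@example.com")
    if findings:
        print(f"Blocked: prompt appears to contain {', '.join(findings)}")
    else:
        print("Prompt passed the basic screen")
```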

“These are software assets; this is software. This is math. It’s not magical unicorns running in our systems,” Kelley said. “But it’s a different kind of software asset and you need to think about threats in a different way.”

For example, production data must be used to train these advanced models, which runs contrary to a DevSecOps approach that would opt for synthetic or tokenized data to stress-test systems and applications. 
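For contrast, here is a minimal sketch of the tokenization a conventional DevSecOps pipeline might favor before data leaves production; the field names and salt are hypothetical. Kelley's point is that training a model on tokens like these often defeats the purpose, which is why the data-handling risks of ML are different.

```python
import hashlib

# Illustrative sketch only: deterministic tokenization of sensitive fields,
# as a DevSecOps pipeline might do before handing data to test environments.
SALT = "rotate-me"  # hypothetical; a real pipeline would manage this secret properly

def tokenize(value: str) -> str:
    """Replace a sensitive value with a stable pseudonymous token."""
    digest = hashlib.sha256((SALT + value).encode()).hexdigest()[:12]
    return f"tok_{digest}"

def tokenize_record(record: dict, sensitive_fields: set[str]) -> dict:
    """Return a copy of the record with sensitive fields tokenized."""
    return {k: tokenize(v) if k in sensitive_fields else v for k, v in record.items()}

if __name__ == "__main__":
    row = {"customer_email": "jane.doe@example.com", "balance": "1042.17"}
    print(tokenize_record(row, {"customer_email"}))
```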

“You need to have good data sets in order to train those models to do what you need it to do,” Kelley said. “So how do you get that data, where is that data from, who was training that model; these are questions that are not standard in regular software development but they are unique to AI.

“It’s a new kind of software asset and approach that is unique to itself. Organizations have to look at those unique risks to build security into that unique lifecycle,” Kelley said. 

The key at the outset, Kelley said, is that security leaders and machine learning engineers each understand their own domains and need to cross over culturally. For security teams, that means learning how to threat model and red team with a machine learning mindset, while developers and engineers must work with security teams to understand where the risks are.

“We’ve got the skills. What we’ve got to do is bring those groups together so we can have that knowledge transfer so we can have those skills cross,” Kelley said.

