If there ever was an environment that could benefit from the promises of artificial intelligence and machine learning advances in cybersecurity, it's operational technology (OT) and industrial control systems (ICS). After all, plenty can go wrong with rules-based systems: when new malware and software exploits emerge, attacks often slide past defenses built on signatures.
Such slips are not acceptable when defending against attacks targeting manufacturing, energy production, healthcare, transportation, and other critical systems where cyber-physical systems dominate the environment. Yet the rapid digital transformation of OT/ICS environments has dramatically increased the attack surface for most organizations. AI/ML-enhanced and Internet-connected control systems, intelligent manufacturing, the proliferation of connected medical devices, and the shift to cloud management of these systems are all driving the rapid convergence of OT and IT.
Consider the manufacturing industry. According to a recent report from management consulting firm McKinsey & Company, intelligent manufacturing can create upwards of $3.7 trillion in value by 2025. With those rewards, however, come substantial risks associated with digital convergence, growing numbers of IIoT devices, increased connectivity among partners and suppliers, and increased automation.
Where business leaders see more value, security leaders see an expanding attack surface: more network traffic and more potentially vulnerable devices and endpoints that attackers can target and perhaps infiltrate. Successfully defending these environments will require more eyes, human or machine, monitoring these devices. This is where AI will hopefully prove its value.
The experts we spoke with said there are several ways AI/ML is currently being used to protect OT/ICS systems, hopefully improving the effectiveness of security teams and defenses.
"AI accelerates the learning of baseline traffic, and AI/ML systems can help detect unusual behavior and patterns within assets and networks," says Itay Glick, VP of products at cybersecurity firm OPSWAT. He explains that unusual activity can be a sign that the environment is compromised or something else is awry, such as misconfigured networked systems. "Solutions that rely on signatures are problematic and will not protect you from the next attack," he adds.
Harman Singh, director at cybersecurity services provider Cyphere, agrees, noting that AI systems can adapt in real time as the network grows and changes. "AI figures out patterns and behaviors on the fly, making it much better at spotting when something isn't right in the traffic." Further, AI can take advantage of, and learn from, the vast data the typical enterprise generates from its connected devices. "And it can spot patterns no one has told it about," says Singh. "It can predict when the network might have problems in the future. With their speedy data-processing skills, AI/ML systems make monitoring network traffic much more accurate and efficient."
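As a concrete illustration of the baseline-and-deviation approach the experts describe, here is a minimal sketch using scikit-learn's IsolationForest as a stand-in for a commercial detector. The flow features, traffic distributions, and thresholds are all illustrative assumptions, not details of any vendor product:

```python
# Minimal sketch of baseline anomaly detection on network-flow features.
# Feature names, distributions, and values are illustrative assumptions.
import numpy as np
from sklearn.ensemble import IsolationForest

rng = np.random.default_rng(42)

# Simulated "normal" OT traffic: [packets/sec, bytes/packet, distinct ports]
baseline_flows = rng.normal(loc=[50, 200, 3], scale=[5, 20, 1], size=(500, 3))

# Learn the baseline from observed traffic
detector = IsolationForest(contamination=0.01, random_state=0)
detector.fit(baseline_flows)

# A flow that looks nothing like the baseline, e.g. a scan or exfiltration burst
suspect_flow = np.array([[900.0, 1400.0, 60.0]])

# predict() returns -1 for anomalies and 1 for flows matching the baseline
print(detector.predict(suspect_flow))
```

Unlike a signature, nothing here encodes what the attack looks like; the model only knows what normal traffic looks like, which is why this style of detection can flag previously unseen activity.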
Of course, just identifying threats isn't enough. Enterprises need to respond to them. Security teams get the most from their abilities when enterprises pair continuous AI monitoring with automated response, such as through their SIEM or SOAR systems. "AI can help organizations better understand the patterns in these environments, especially at the points where OT and IT systems meet," says Scott Crawford, head of information security research at S&P Global Market Intelligence.
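The pairing of continuous scoring with automated response can start as simply as a policy that routes anomaly scores to actions. The tiers, thresholds, and action names below are hypothetical, not taken from any specific SIEM or SOAR product:

```python
# Hypothetical score-to-action policy illustrating AI monitoring feeding
# automated response. Thresholds and action names are assumptions for the demo.
def respond(anomaly_score: float) -> str:
    """Map a detector's anomaly score (0.0 = normal, 1.0 = worst) to an action."""
    if anomaly_score >= 0.9:
        return "isolate-device"   # e.g., a SOAR playbook quarantines the endpoint
    if anomaly_score >= 0.6:
        return "open-incident"    # page the on-call analyst with context
    if anomaly_score >= 0.3:
        return "log-and-watch"    # record for trend analysis, no action yet
    return "ignore"

print(respond(0.95))  # isolate-device
print(respond(0.45))  # log-and-watch
```

In an OT environment, the highest-severity tier deserves particular care: automatically isolating a controller can itself disrupt a physical process, so many teams keep a human in the loop for that step.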
Many of the gains, however, will be through using AI to help automate the mundane. "Think log analysis, configuration analysis, vulnerability management lifecycle," says Crawford.
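Automating the mundane in log analysis can start small. This sketch counts failed logins per host and surfaces the outliers for review; the log format, host names, and the threshold of three failures are made up for the demonstration:

```python
# Sketch of automated log triage: flag hosts with unusually many failed logins.
# The log format, host names, and failure threshold are illustrative assumptions.
import re
from collections import Counter

LOG_LINES = [
    "2024-05-01T10:00:01 host=hmi-01 event=login result=fail",
    "2024-05-01T10:00:02 host=hmi-01 event=login result=fail",
    "2024-05-01T10:00:03 host=hmi-01 event=login result=fail",
    "2024-05-01T10:00:04 host=hmi-01 event=login result=fail",
    "2024-05-01T10:01:00 host=plc-07 event=login result=ok",
]

FAIL_PATTERN = re.compile(r"host=(\S+) event=login result=fail")

# Tally failed logins per host
failures = Counter(
    m.group(1) for line in LOG_LINES if (m := FAIL_PATTERN.search(line))
)

# Surface hosts exceeding the (assumed) threshold for an analyst to review
flagged = [host for host, count in failures.items() if count > 3]
print(flagged)  # ['hmi-01']
```

A rule this simple handles only one known pattern; the appeal of layering ML on top is learning what "unusually many" means per host and per shift, rather than hard-coding a single threshold.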
Of course, using AI/ML to manage risk has to be done right, and there are plenty of challenges in getting there. Getting it right requires building the proper foundations.
One of the most common challenges is obtaining the correct data to train the AI/ML systems. "AI systems work by being trained on data pertinent to the problem. However, businesses frequently struggle to feed their AI algorithms with the correct kind or quantity of data because they lack access to it or it isn't currently available," says Jan Chapman, co-founder and managing director of MSP Blueshift. "When using your AI system, this imbalance may produce inconsistent or discriminatory outcomes. You can avoid this bias problem by making sure you use representative and high-quality data," Chapman says.
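The imbalance problem Chapman describes can be checked for before any training begins. A minimal sketch, with hypothetical labels and an assumed 10:1 ratio cutoff, might look like:

```python
# Minimal pre-training data check for the class-imbalance problem described
# above. The labels and the 10:1 ratio cutoff are illustrative assumptions.
from collections import Counter

def imbalance_ratio(labels: list[str]) -> float:
    """Ratio of the most common class count to the least common."""
    counts = Counter(labels)
    return max(counts.values()) / min(counts.values())

# A lopsided training set: 950 benign samples to 50 malicious ones (19:1)
training_labels = ["benign"] * 950 + ["malicious"] * 50

ratio = imbalance_ratio(training_labels)
if ratio > 10:
    print(f"ratio {ratio:.0f}:1 exceeds 10:1 -- rebalance before training")
```

Checks like this are cheap insurance: catching a skewed dataset before training is far less costly than discovering inconsistent detections in production.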
"The availability and quality of data, in my opinion, is one of the most pressing issues organizations face when trying to maximize the potential of their AI/ML systems," says Joshua Spencer, founder of healthcare AI specialist BastionGPT. "Effective machine learning depends on having access to a large and pertinent dataset. Without it, even the most advanced algorithms may have trouble making precise observations or projections," says Spencer.
Aligning AI outputs with business goals is another noteworthy challenge. "Careful calibration and a thorough comprehension of the problem domain are necessary to ensure that the AI solutions produced meaningfully contribute to the organization's goals," adds Spencer.
Finally, training and maintaining AI/ML systems can be costly. "They need lots of computer power and skilled people to keep them working well. They also need good, clean data to learn from, and that's not always easy to get. Ensuring accurate data is super important for getting good results," says Cyphere's Singh.
While these challenges are steep, they're not insurmountable. Enterprises with significant OT/ICS systems to secure can benefit from AI/ML, but they have to define their security and business objectives upfront and invest in clean, correct data and properly trained models. "Testing and checking are super important too so that the AI/ML system doesn't make mistakes," says Singh.
While precise numbers on AI/ML security investment in OT/ICS systems are tough to come by, AI/ML investment more broadly is expected to grow from $17.4 billion in 2022 to $103 billion by 2032. Let's hope enterprises are testing and watching these systems, at least if they want to be sure they're getting the value they expect from these investments.
George V. Hulme is an award-winning journalist and internationally recognized information security and business technology writer. He has covered business, technology, and IT security topics for more than 20 years. His work has appeared in CSOOnline, ComputerWorld, InformationWeek, Security Boulevard, and dozens of other technology publications. He is also a founding editor at DevOps.com.